Jan 31 02:31:35 np0005603663 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 02:31:35 np0005603663 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 02:31:35 np0005603663 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 02:31:35 np0005603663 kernel: BIOS-provided physical RAM map:
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 02:31:35 np0005603663 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 02:31:35 np0005603663 kernel: NX (Execute Disable) protection: active
Jan 31 02:31:35 np0005603663 kernel: APIC: Static calls initialized
Jan 31 02:31:35 np0005603663 kernel: SMBIOS 2.8 present.
Jan 31 02:31:35 np0005603663 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 02:31:35 np0005603663 kernel: Hypervisor detected: KVM
Jan 31 02:31:35 np0005603663 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 02:31:35 np0005603663 kernel: kvm-clock: using sched offset of 4108458130 cycles
Jan 31 02:31:35 np0005603663 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 02:31:35 np0005603663 kernel: tsc: Detected 2800.000 MHz processor
Jan 31 02:31:35 np0005603663 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 02:31:35 np0005603663 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 02:31:35 np0005603663 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 02:31:35 np0005603663 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 02:31:35 np0005603663 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 02:31:35 np0005603663 kernel: Using GB pages for direct mapping
Jan 31 02:31:35 np0005603663 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 02:31:35 np0005603663 kernel: ACPI: Early table checksum verification disabled
Jan 31 02:31:35 np0005603663 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 02:31:35 np0005603663 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 02:31:35 np0005603663 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 02:31:35 np0005603663 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 02:31:35 np0005603663 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 02:31:35 np0005603663 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 02:31:35 np0005603663 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 02:31:35 np0005603663 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 02:31:35 np0005603663 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 02:31:35 np0005603663 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 02:31:35 np0005603663 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 02:31:35 np0005603663 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 02:31:35 np0005603663 kernel: No NUMA configuration found
Jan 31 02:31:35 np0005603663 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 02:31:35 np0005603663 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 02:31:35 np0005603663 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 02:31:35 np0005603663 kernel: Zone ranges:
Jan 31 02:31:35 np0005603663 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 02:31:35 np0005603663 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 02:31:35 np0005603663 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 02:31:35 np0005603663 kernel:  Device   empty
Jan 31 02:31:35 np0005603663 kernel: Movable zone start for each node
Jan 31 02:31:35 np0005603663 kernel: Early memory node ranges
Jan 31 02:31:35 np0005603663 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 02:31:35 np0005603663 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 02:31:35 np0005603663 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 02:31:35 np0005603663 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 02:31:35 np0005603663 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 02:31:35 np0005603663 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 02:31:35 np0005603663 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 02:31:35 np0005603663 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 02:31:35 np0005603663 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 02:31:35 np0005603663 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 02:31:35 np0005603663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 02:31:35 np0005603663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 02:31:35 np0005603663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 02:31:35 np0005603663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 02:31:35 np0005603663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 02:31:35 np0005603663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 02:31:35 np0005603663 kernel: TSC deadline timer available
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Max. logical packages:   8
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Max. logical dies:       8
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Max. dies per package:   1
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Max. threads per core:   1
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Num. cores per package:     1
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Num. threads per package:   1
Jan 31 02:31:35 np0005603663 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 02:31:35 np0005603663 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 02:31:35 np0005603663 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 02:31:35 np0005603663 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 02:31:35 np0005603663 kernel: Booting paravirtualized kernel on KVM
Jan 31 02:31:35 np0005603663 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 02:31:35 np0005603663 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 02:31:35 np0005603663 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 02:31:35 np0005603663 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 02:31:35 np0005603663 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 02:31:35 np0005603663 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 02:31:35 np0005603663 kernel: random: crng init done
Jan 31 02:31:35 np0005603663 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: Fallback order for Node 0: 0 
Jan 31 02:31:35 np0005603663 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 02:31:35 np0005603663 kernel: Policy zone: Normal
Jan 31 02:31:35 np0005603663 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 02:31:35 np0005603663 kernel: software IO TLB: area num 8.
Jan 31 02:31:35 np0005603663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 02:31:35 np0005603663 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 02:31:35 np0005603663 kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 02:31:35 np0005603663 kernel: Dynamic Preempt: voluntary
Jan 31 02:31:35 np0005603663 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 02:31:35 np0005603663 kernel: rcu: 	RCU event tracing is enabled.
Jan 31 02:31:35 np0005603663 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 02:31:35 np0005603663 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 31 02:31:35 np0005603663 kernel: 	Rude variant of Tasks RCU enabled.
Jan 31 02:31:35 np0005603663 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 31 02:31:35 np0005603663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 02:31:35 np0005603663 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 02:31:35 np0005603663 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 02:31:35 np0005603663 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 02:31:35 np0005603663 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 02:31:35 np0005603663 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 02:31:35 np0005603663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 02:31:35 np0005603663 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 02:31:35 np0005603663 kernel: Console: colour VGA+ 80x25
Jan 31 02:31:35 np0005603663 kernel: printk: console [ttyS0] enabled
Jan 31 02:31:35 np0005603663 kernel: ACPI: Core revision 20230331
Jan 31 02:31:35 np0005603663 kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 02:31:35 np0005603663 kernel: x2apic enabled
Jan 31 02:31:35 np0005603663 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 02:31:35 np0005603663 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 02:31:35 np0005603663 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 02:31:35 np0005603663 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 02:31:35 np0005603663 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 02:31:35 np0005603663 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 02:31:35 np0005603663 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 02:31:35 np0005603663 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 02:31:35 np0005603663 kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 02:31:35 np0005603663 kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 02:31:35 np0005603663 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 02:31:35 np0005603663 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 02:31:35 np0005603663 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 02:31:35 np0005603663 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 02:31:35 np0005603663 kernel: active return thunk: retbleed_return_thunk
Jan 31 02:31:35 np0005603663 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 02:31:35 np0005603663 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 02:31:35 np0005603663 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 02:31:35 np0005603663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 02:31:35 np0005603663 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 02:31:35 np0005603663 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 02:31:35 np0005603663 kernel: Freeing SMP alternatives memory: 40K
Jan 31 02:31:35 np0005603663 kernel: pid_max: default: 32768 minimum: 301
Jan 31 02:31:35 np0005603663 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 02:31:35 np0005603663 kernel: landlock: Up and running.
Jan 31 02:31:35 np0005603663 kernel: Yama: becoming mindful.
Jan 31 02:31:35 np0005603663 kernel: SELinux:  Initializing.
Jan 31 02:31:35 np0005603663 kernel: LSM support for eBPF active
Jan 31 02:31:35 np0005603663 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 02:31:35 np0005603663 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 02:31:35 np0005603663 kernel: ... version:                0
Jan 31 02:31:35 np0005603663 kernel: ... bit width:              48
Jan 31 02:31:35 np0005603663 kernel: ... generic registers:      6
Jan 31 02:31:35 np0005603663 kernel: ... value mask:             0000ffffffffffff
Jan 31 02:31:35 np0005603663 kernel: ... max period:             00007fffffffffff
Jan 31 02:31:35 np0005603663 kernel: ... fixed-purpose events:   0
Jan 31 02:31:35 np0005603663 kernel: ... event mask:             000000000000003f
Jan 31 02:31:35 np0005603663 kernel: signal: max sigframe size: 1776
Jan 31 02:31:35 np0005603663 kernel: rcu: Hierarchical SRCU implementation.
Jan 31 02:31:35 np0005603663 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 31 02:31:35 np0005603663 kernel: smp: Bringing up secondary CPUs ...
Jan 31 02:31:35 np0005603663 kernel: smpboot: x86: Booting SMP configuration:
Jan 31 02:31:35 np0005603663 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 02:31:35 np0005603663 kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 02:31:35 np0005603663 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 31 02:31:35 np0005603663 kernel: node 0 deferred pages initialised in 10ms
Jan 31 02:31:35 np0005603663 kernel: Memory: 7763608K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 31 02:31:35 np0005603663 kernel: devtmpfs: initialized
Jan 31 02:31:35 np0005603663 kernel: x86/mm: Memory block size: 128MB
Jan 31 02:31:35 np0005603663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 02:31:35 np0005603663 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 02:31:35 np0005603663 kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 02:31:35 np0005603663 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 02:31:35 np0005603663 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 02:31:35 np0005603663 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 02:31:35 np0005603663 kernel: audit: initializing netlink subsys (disabled)
Jan 31 02:31:35 np0005603663 kernel: audit: type=2000 audit(1769844694.169:1): state=initialized audit_enabled=0 res=1
Jan 31 02:31:35 np0005603663 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 02:31:35 np0005603663 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 02:31:35 np0005603663 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 02:31:35 np0005603663 kernel: cpuidle: using governor menu
Jan 31 02:31:35 np0005603663 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 02:31:35 np0005603663 kernel: PCI: Using configuration type 1 for base access
Jan 31 02:31:35 np0005603663 kernel: PCI: Using configuration type 1 for extended access
Jan 31 02:31:35 np0005603663 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 02:31:35 np0005603663 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 02:31:35 np0005603663 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 02:31:35 np0005603663 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 02:31:35 np0005603663 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 02:31:35 np0005603663 kernel: Demotion targets for Node 0: null
Jan 31 02:31:35 np0005603663 kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 02:31:35 np0005603663 kernel: ACPI: Added _OSI(Module Device)
Jan 31 02:31:35 np0005603663 kernel: ACPI: Added _OSI(Processor Device)
Jan 31 02:31:35 np0005603663 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 02:31:35 np0005603663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 02:31:35 np0005603663 kernel: ACPI: Interpreter enabled
Jan 31 02:31:35 np0005603663 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 02:31:35 np0005603663 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 02:31:35 np0005603663 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 02:31:35 np0005603663 kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 02:31:35 np0005603663 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 02:31:35 np0005603663 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [3] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [4] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [5] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [6] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [7] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [8] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [9] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [10] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [11] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [12] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [13] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [14] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [15] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [16] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [17] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [18] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [19] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [20] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [21] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [22] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [23] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [24] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [25] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [26] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [27] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [28] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [29] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [30] registered
Jan 31 02:31:35 np0005603663 kernel: acpiphp: Slot [31] registered
Jan 31 02:31:35 np0005603663 kernel: PCI host bridge to bus 0000:00
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 02:31:35 np0005603663 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 02:31:35 np0005603663 kernel: iommu: Default domain type: Translated
Jan 31 02:31:35 np0005603663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 02:31:35 np0005603663 kernel: SCSI subsystem initialized
Jan 31 02:31:35 np0005603663 kernel: ACPI: bus type USB registered
Jan 31 02:31:35 np0005603663 kernel: usbcore: registered new interface driver usbfs
Jan 31 02:31:35 np0005603663 kernel: usbcore: registered new interface driver hub
Jan 31 02:31:35 np0005603663 kernel: usbcore: registered new device driver usb
Jan 31 02:31:35 np0005603663 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 02:31:35 np0005603663 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 02:31:35 np0005603663 kernel: PTP clock support registered
Jan 31 02:31:35 np0005603663 kernel: EDAC MC: Ver: 3.0.0
Jan 31 02:31:35 np0005603663 kernel: NetLabel: Initializing
Jan 31 02:31:35 np0005603663 kernel: NetLabel:  domain hash size = 128
Jan 31 02:31:35 np0005603663 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 02:31:35 np0005603663 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 02:31:35 np0005603663 kernel: PCI: Using ACPI for IRQ routing
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 02:31:35 np0005603663 kernel: vgaarb: loaded
Jan 31 02:31:35 np0005603663 kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 02:31:35 np0005603663 kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 02:31:35 np0005603663 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 02:31:35 np0005603663 kernel: pnp: PnP ACPI init
Jan 31 02:31:35 np0005603663 kernel: pnp: PnP ACPI: found 5 devices
Jan 31 02:31:35 np0005603663 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_INET protocol family
Jan 31 02:31:35 np0005603663 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 02:31:35 np0005603663 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_XDP protocol family
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 02:31:35 np0005603663 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 02:31:35 np0005603663 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 02:31:35 np0005603663 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24569 usecs
Jan 31 02:31:35 np0005603663 kernel: PCI: CLS 0 bytes, default 64
Jan 31 02:31:35 np0005603663 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 02:31:35 np0005603663 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 02:31:35 np0005603663 kernel: ACPI: bus type thunderbolt registered
Jan 31 02:31:35 np0005603663 kernel: Trying to unpack rootfs image as initramfs...
Jan 31 02:31:35 np0005603663 kernel: Initialise system trusted keyrings
Jan 31 02:31:35 np0005603663 kernel: Key type blacklist registered
Jan 31 02:31:35 np0005603663 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 02:31:35 np0005603663 kernel: zbud: loaded
Jan 31 02:31:35 np0005603663 kernel: integrity: Platform Keyring initialized
Jan 31 02:31:35 np0005603663 kernel: integrity: Machine keyring initialized
Jan 31 02:31:35 np0005603663 kernel: Freeing initrd memory: 88000K
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_ALG protocol family
Jan 31 02:31:35 np0005603663 kernel: xor: automatically using best checksumming function   avx       
Jan 31 02:31:35 np0005603663 kernel: Key type asymmetric registered
Jan 31 02:31:35 np0005603663 kernel: Asymmetric key parser 'x509' registered
Jan 31 02:31:35 np0005603663 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 02:31:35 np0005603663 kernel: io scheduler mq-deadline registered
Jan 31 02:31:35 np0005603663 kernel: io scheduler kyber registered
Jan 31 02:31:35 np0005603663 kernel: io scheduler bfq registered
Jan 31 02:31:35 np0005603663 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 02:31:35 np0005603663 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 02:31:35 np0005603663 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 02:31:35 np0005603663 kernel: ACPI: button: Power Button [PWRF]
Jan 31 02:31:35 np0005603663 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 02:31:35 np0005603663 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 02:31:35 np0005603663 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 02:31:35 np0005603663 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 02:31:35 np0005603663 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 02:31:35 np0005603663 kernel: Non-volatile memory driver v1.3
Jan 31 02:31:35 np0005603663 kernel: rdac: device handler registered
Jan 31 02:31:35 np0005603663 kernel: hp_sw: device handler registered
Jan 31 02:31:35 np0005603663 kernel: emc: device handler registered
Jan 31 02:31:35 np0005603663 kernel: alua: device handler registered
Jan 31 02:31:35 np0005603663 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 02:31:35 np0005603663 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 02:31:35 np0005603663 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 02:31:35 np0005603663 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 02:31:35 np0005603663 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 02:31:35 np0005603663 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 02:31:35 np0005603663 kernel: usb usb1: Product: UHCI Host Controller
Jan 31 02:31:35 np0005603663 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 02:31:35 np0005603663 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 02:31:35 np0005603663 kernel: hub 1-0:1.0: USB hub found
Jan 31 02:31:35 np0005603663 kernel: hub 1-0:1.0: 2 ports detected
Jan 31 02:31:35 np0005603663 kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 02:31:35 np0005603663 kernel: usbserial: USB Serial support registered for generic
Jan 31 02:31:35 np0005603663 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 02:31:35 np0005603663 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 02:31:35 np0005603663 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 02:31:35 np0005603663 kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 02:31:35 np0005603663 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 02:31:35 np0005603663 kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 02:31:35 np0005603663 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T07:31:34 UTC (1769844694)
Jan 31 02:31:35 np0005603663 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 02:31:35 np0005603663 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 02:31:35 np0005603663 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 02:31:35 np0005603663 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 02:31:35 np0005603663 kernel: usbcore: registered new interface driver usbhid
Jan 31 02:31:35 np0005603663 kernel: usbhid: USB HID core driver
Jan 31 02:31:35 np0005603663 kernel: drop_monitor: Initializing network drop monitor service
Jan 31 02:31:35 np0005603663 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 02:31:35 np0005603663 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 02:31:35 np0005603663 kernel: Initializing XFRM netlink socket
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_INET6 protocol family
Jan 31 02:31:35 np0005603663 kernel: Segment Routing with IPv6
Jan 31 02:31:35 np0005603663 kernel: NET: Registered PF_PACKET protocol family
Jan 31 02:31:35 np0005603663 kernel: mpls_gso: MPLS GSO support
Jan 31 02:31:35 np0005603663 kernel: IPI shorthand broadcast: enabled
Jan 31 02:31:35 np0005603663 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 02:31:35 np0005603663 kernel: AES CTR mode by8 optimization enabled
Jan 31 02:31:35 np0005603663 kernel: sched_clock: Marking stable (919010430, 139302830)->(1131291050, -72977790)
Jan 31 02:31:35 np0005603663 kernel: registered taskstats version 1
Jan 31 02:31:35 np0005603663 kernel: Loading compiled-in X.509 certificates
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 02:31:35 np0005603663 kernel: Demotion targets for Node 0: null
Jan 31 02:31:35 np0005603663 kernel: page_owner is disabled
Jan 31 02:31:35 np0005603663 kernel: Key type .fscrypt registered
Jan 31 02:31:35 np0005603663 kernel: Key type fscrypt-provisioning registered
Jan 31 02:31:35 np0005603663 kernel: Key type big_key registered
Jan 31 02:31:35 np0005603663 kernel: Key type encrypted registered
Jan 31 02:31:35 np0005603663 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 02:31:35 np0005603663 kernel: Loading compiled-in module X.509 certificates
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 02:31:35 np0005603663 kernel: ima: Allocated hash algorithm: sha256
Jan 31 02:31:35 np0005603663 kernel: ima: No architecture policies found
Jan 31 02:31:35 np0005603663 kernel: evm: Initialising EVM extended attributes:
Jan 31 02:31:35 np0005603663 kernel: evm: security.selinux
Jan 31 02:31:35 np0005603663 kernel: evm: security.SMACK64 (disabled)
Jan 31 02:31:35 np0005603663 kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 02:31:35 np0005603663 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 02:31:35 np0005603663 kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 02:31:35 np0005603663 kernel: evm: security.apparmor (disabled)
Jan 31 02:31:35 np0005603663 kernel: evm: security.ima
Jan 31 02:31:35 np0005603663 kernel: evm: security.capability
Jan 31 02:31:35 np0005603663 kernel: evm: HMAC attrs: 0x1
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 02:31:35 np0005603663 kernel: Running certificate verification RSA selftest
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 02:31:35 np0005603663 kernel: Running certificate verification ECDSA selftest
Jan 31 02:31:35 np0005603663 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 02:31:35 np0005603663 kernel: clk: Disabling unused clocks
Jan 31 02:31:35 np0005603663 kernel: Freeing unused decrypted memory: 2028K
Jan 31 02:31:35 np0005603663 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 02:31:35 np0005603663 kernel: Write protecting the kernel read-only data: 30720k
Jan 31 02:31:35 np0005603663 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 02:31:35 np0005603663 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 02:31:35 np0005603663 kernel: Run /init as init process
Jan 31 02:31:35 np0005603663 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 02:31:35 np0005603663 systemd: Detected virtualization kvm.
Jan 31 02:31:35 np0005603663 systemd: Detected architecture x86-64.
Jan 31 02:31:35 np0005603663 systemd: Running in initrd.
Jan 31 02:31:35 np0005603663 systemd: No hostname configured, using default hostname.
Jan 31 02:31:35 np0005603663 systemd: Hostname set to <localhost>.
Jan 31 02:31:35 np0005603663 systemd: Initializing machine ID from VM UUID.
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: Manufacturer: QEMU
Jan 31 02:31:35 np0005603663 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 02:31:35 np0005603663 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 02:31:35 np0005603663 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 02:31:35 np0005603663 systemd: Queued start job for default target Initrd Default Target.
Jan 31 02:31:35 np0005603663 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 02:31:35 np0005603663 systemd: Reached target Local Encrypted Volumes.
Jan 31 02:31:35 np0005603663 systemd: Reached target Initrd /usr File System.
Jan 31 02:31:35 np0005603663 systemd: Reached target Local File Systems.
Jan 31 02:31:35 np0005603663 systemd: Reached target Path Units.
Jan 31 02:31:35 np0005603663 systemd: Reached target Slice Units.
Jan 31 02:31:35 np0005603663 systemd: Reached target Swaps.
Jan 31 02:31:35 np0005603663 systemd: Reached target Timer Units.
Jan 31 02:31:35 np0005603663 systemd: Listening on D-Bus System Message Bus Socket.
Jan 31 02:31:35 np0005603663 systemd: Listening on Journal Socket (/dev/log).
Jan 31 02:31:35 np0005603663 systemd: Listening on Journal Socket.
Jan 31 02:31:35 np0005603663 systemd: Listening on udev Control Socket.
Jan 31 02:31:35 np0005603663 systemd: Listening on udev Kernel Socket.
Jan 31 02:31:35 np0005603663 systemd: Reached target Socket Units.
Jan 31 02:31:35 np0005603663 systemd: Starting Create List of Static Device Nodes...
Jan 31 02:31:35 np0005603663 systemd: Starting Journal Service...
Jan 31 02:31:35 np0005603663 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 02:31:35 np0005603663 systemd: Starting Apply Kernel Variables...
Jan 31 02:31:35 np0005603663 systemd: Starting Create System Users...
Jan 31 02:31:35 np0005603663 systemd: Starting Setup Virtual Console...
Jan 31 02:31:35 np0005603663 systemd: Finished Create List of Static Device Nodes.
Jan 31 02:31:35 np0005603663 systemd: Finished Apply Kernel Variables.
Jan 31 02:31:35 np0005603663 systemd: Finished Create System Users.
Jan 31 02:31:35 np0005603663 systemd-journald[305]: Journal started
Jan 31 02:31:35 np0005603663 systemd-journald[305]: Runtime Journal (/run/log/journal/2848852e0b6443df9df31c9bd96fb83b) is 8.0M, max 153.6M, 145.6M free.
Jan 31 02:31:35 np0005603663 systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 31 02:31:35 np0005603663 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 31 02:31:35 np0005603663 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 02:31:35 np0005603663 systemd: Started Journal Service.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 02:31:35 np0005603663 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 02:31:35 np0005603663 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 02:31:35 np0005603663 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 02:31:35 np0005603663 systemd[1]: Finished Setup Virtual Console.
Jan 31 02:31:35 np0005603663 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting dracut cmdline hook...
Jan 31 02:31:35 np0005603663 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 02:31:35 np0005603663 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 02:31:35 np0005603663 systemd[1]: Finished dracut cmdline hook.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting dracut pre-udev hook...
Jan 31 02:31:35 np0005603663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 02:31:35 np0005603663 kernel: device-mapper: uevent: version 1.0.3
Jan 31 02:31:35 np0005603663 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 02:31:35 np0005603663 kernel: RPC: Registered named UNIX socket transport module.
Jan 31 02:31:35 np0005603663 kernel: RPC: Registered udp transport module.
Jan 31 02:31:35 np0005603663 kernel: RPC: Registered tcp transport module.
Jan 31 02:31:35 np0005603663 kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 02:31:35 np0005603663 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 02:31:35 np0005603663 rpc.statd[441]: Version 2.5.4 starting
Jan 31 02:31:35 np0005603663 rpc.statd[441]: Initializing NSM state
Jan 31 02:31:35 np0005603663 rpc.idmapd[446]: Setting log level to 0
Jan 31 02:31:35 np0005603663 systemd[1]: Finished dracut pre-udev hook.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 02:31:35 np0005603663 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 02:31:35 np0005603663 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting dracut pre-trigger hook...
Jan 31 02:31:35 np0005603663 systemd[1]: Finished dracut pre-trigger hook.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting Coldplug All udev Devices...
Jan 31 02:31:35 np0005603663 systemd[1]: Created slice Slice /system/modprobe.
Jan 31 02:31:35 np0005603663 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 02:31:35 np0005603663 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 02:31:35 np0005603663 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 02:31:35 np0005603663 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 02:31:35 np0005603663 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 02:31:35 np0005603663 systemd[1]: Reached target Network.
Jan 31 02:31:35 np0005603663 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 02:31:35 np0005603663 systemd[1]: Starting dracut initqueue hook...
Jan 31 02:31:35 np0005603663 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 02:31:35 np0005603663 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 02:31:35 np0005603663 kernel: vda: vda1
Jan 31 02:31:35 np0005603663 systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:35 np0005603663 kernel: scsi host0: ata_piix
Jan 31 02:31:35 np0005603663 kernel: scsi host1: ata_piix
Jan 31 02:31:35 np0005603663 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 02:31:35 np0005603663 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 02:31:36 np0005603663 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Initrd Root Device.
Jan 31 02:31:36 np0005603663 systemd[1]: Mounting Kernel Configuration File System...
Jan 31 02:31:36 np0005603663 systemd[1]: Mounted Kernel Configuration File System.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target System Initialization.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Basic System.
Jan 31 02:31:36 np0005603663 kernel: ata1: found unknown device (class 0)
Jan 31 02:31:36 np0005603663 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 02:31:36 np0005603663 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 02:31:36 np0005603663 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 02:31:36 np0005603663 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 02:31:36 np0005603663 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 02:31:36 np0005603663 systemd[1]: Finished dracut initqueue hook.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Remote File Systems.
Jan 31 02:31:36 np0005603663 systemd[1]: Starting dracut pre-mount hook...
Jan 31 02:31:36 np0005603663 systemd[1]: Finished dracut pre-mount hook.
Jan 31 02:31:36 np0005603663 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 02:31:36 np0005603663 systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 02:31:36 np0005603663 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 02:31:36 np0005603663 systemd[1]: Mounting /sysroot...
Jan 31 02:31:36 np0005603663 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 02:31:36 np0005603663 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 02:31:36 np0005603663 kernel: XFS (vda1): Ending clean mount
Jan 31 02:31:36 np0005603663 systemd[1]: Mounted /sysroot.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Initrd Root File System.
Jan 31 02:31:36 np0005603663 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 02:31:36 np0005603663 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 02:31:36 np0005603663 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Initrd File Systems.
Jan 31 02:31:36 np0005603663 systemd[1]: Reached target Initrd Default Target.
Jan 31 02:31:36 np0005603663 systemd[1]: Starting dracut mount hook...
Jan 31 02:31:36 np0005603663 systemd[1]: Finished dracut mount hook.
Jan 31 02:31:36 np0005603663 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 02:31:37 np0005603663 rpc.idmapd[446]: exiting on signal 15
Jan 31 02:31:37 np0005603663 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 02:31:37 np0005603663 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Network.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Timer Units.
Jan 31 02:31:37 np0005603663 systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Initrd Default Target.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Basic System.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Initrd Root Device.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Initrd /usr File System.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Path Units.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Remote File Systems.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Slice Units.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Socket Units.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target System Initialization.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Local File Systems.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Swaps.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut mount hook.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut pre-mount hook.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut initqueue hook.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Setup Virtual Console.
Jan 31 02:31:37 np0005603663 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Closed udev Control Socket.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Closed udev Kernel Socket.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut pre-udev hook.
Jan 31 02:31:37 np0005603663 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped dracut cmdline hook.
Jan 31 02:31:37 np0005603663 systemd[1]: Starting Cleanup udev Database...
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 02:31:37 np0005603663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 02:31:37 np0005603663 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Stopped Create System Users.
Jan 31 02:31:37 np0005603663 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 02:31:37 np0005603663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 02:31:37 np0005603663 systemd[1]: Finished Cleanup udev Database.
Jan 31 02:31:37 np0005603663 systemd[1]: Reached target Switch Root.
Jan 31 02:31:37 np0005603663 systemd[1]: Starting Switch Root...
Jan 31 02:31:37 np0005603663 systemd[1]: Switching root.
Jan 31 02:31:37 np0005603663 systemd-journald[305]: Journal stopped
Jan 31 02:31:38 np0005603663 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 31 02:31:38 np0005603663 kernel: audit: type=1404 audit(1769844697.342:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:31:38 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:31:38 np0005603663 kernel: audit: type=1403 audit(1769844697.478:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 02:31:38 np0005603663 systemd: Successfully loaded SELinux policy in 142.036ms.
Jan 31 02:31:38 np0005603663 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.829ms.
Jan 31 02:31:38 np0005603663 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 02:31:38 np0005603663 systemd: Detected virtualization kvm.
Jan 31 02:31:38 np0005603663 systemd: Detected architecture x86-64.
Jan 31 02:31:38 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:31:38 np0005603663 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd: Stopped Switch Root.
Jan 31 02:31:38 np0005603663 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 02:31:38 np0005603663 systemd: Created slice Slice /system/getty.
Jan 31 02:31:38 np0005603663 systemd: Created slice Slice /system/serial-getty.
Jan 31 02:31:38 np0005603663 systemd: Created slice Slice /system/sshd-keygen.
Jan 31 02:31:38 np0005603663 systemd: Created slice User and Session Slice.
Jan 31 02:31:38 np0005603663 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 02:31:38 np0005603663 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 31 02:31:38 np0005603663 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 02:31:38 np0005603663 systemd: Reached target Local Encrypted Volumes.
Jan 31 02:31:38 np0005603663 systemd: Stopped target Switch Root.
Jan 31 02:31:38 np0005603663 systemd: Stopped target Initrd File Systems.
Jan 31 02:31:38 np0005603663 systemd: Stopped target Initrd Root File System.
Jan 31 02:31:38 np0005603663 systemd: Reached target Local Integrity Protected Volumes.
Jan 31 02:31:38 np0005603663 systemd: Reached target Path Units.
Jan 31 02:31:38 np0005603663 systemd: Reached target rpc_pipefs.target.
Jan 31 02:31:38 np0005603663 systemd: Reached target Slice Units.
Jan 31 02:31:38 np0005603663 systemd: Reached target Swaps.
Jan 31 02:31:38 np0005603663 systemd: Reached target Local Verity Protected Volumes.
Jan 31 02:31:38 np0005603663 systemd: Listening on RPCbind Server Activation Socket.
Jan 31 02:31:38 np0005603663 systemd: Reached target RPC Port Mapper.
Jan 31 02:31:38 np0005603663 systemd: Listening on Process Core Dump Socket.
Jan 31 02:31:38 np0005603663 systemd: Listening on initctl Compatibility Named Pipe.
Jan 31 02:31:38 np0005603663 systemd: Listening on udev Control Socket.
Jan 31 02:31:38 np0005603663 systemd: Listening on udev Kernel Socket.
Jan 31 02:31:38 np0005603663 systemd: Mounting Huge Pages File System...
Jan 31 02:31:38 np0005603663 systemd: Mounting POSIX Message Queue File System...
Jan 31 02:31:38 np0005603663 systemd: Mounting Kernel Debug File System...
Jan 31 02:31:38 np0005603663 systemd: Mounting Kernel Trace File System...
Jan 31 02:31:38 np0005603663 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 02:31:38 np0005603663 systemd: Starting Create List of Static Device Nodes...
Jan 31 02:31:38 np0005603663 systemd: Starting Load Kernel Module configfs...
Jan 31 02:31:38 np0005603663 systemd: Starting Load Kernel Module drm...
Jan 31 02:31:38 np0005603663 systemd: Starting Load Kernel Module efi_pstore...
Jan 31 02:31:38 np0005603663 systemd: Starting Load Kernel Module fuse...
Jan 31 02:31:38 np0005603663 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 02:31:38 np0005603663 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd: Stopped File System Check on Root Device.
Jan 31 02:31:38 np0005603663 systemd: Stopped Journal Service.
Jan 31 02:31:38 np0005603663 systemd: Starting Journal Service...
Jan 31 02:31:38 np0005603663 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 02:31:38 np0005603663 systemd: Starting Generate network units from Kernel command line...
Jan 31 02:31:38 np0005603663 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 02:31:38 np0005603663 systemd: Starting Remount Root and Kernel File Systems...
Jan 31 02:31:38 np0005603663 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 02:31:38 np0005603663 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:38 np0005603663 systemd: Starting Apply Kernel Variables...
Jan 31 02:31:38 np0005603663 kernel: fuse: init (API version 7.37)
Jan 31 02:31:38 np0005603663 systemd: Starting Coldplug All udev Devices...
Jan 31 02:31:38 np0005603663 systemd: Mounted Huge Pages File System.
Jan 31 02:31:38 np0005603663 systemd: Mounted POSIX Message Queue File System.
Jan 31 02:31:38 np0005603663 systemd: Mounted Kernel Debug File System.
Jan 31 02:31:38 np0005603663 systemd: Mounted Kernel Trace File System.
Jan 31 02:31:38 np0005603663 systemd: Finished Create List of Static Device Nodes.
Jan 31 02:31:38 np0005603663 systemd: modprobe@configfs.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd: Finished Load Kernel Module configfs.
Jan 31 02:31:38 np0005603663 kernel: ACPI: bus type drm_connector registered
Jan 31 02:31:38 np0005603663 systemd-journald[682]: Journal started
Jan 31 02:31:38 np0005603663 systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 02:31:38 np0005603663 systemd[1]: Queued start job for default target Multi-User System.
Jan 31 02:31:38 np0005603663 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd: Started Journal Service.
Jan 31 02:31:38 np0005603663 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Load Kernel Module drm.
Jan 31 02:31:38 np0005603663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 02:31:38 np0005603663 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Load Kernel Module fuse.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Apply Kernel Variables.
Jan 31 02:31:38 np0005603663 systemd[1]: Mounting FUSE Control File System...
Jan 31 02:31:38 np0005603663 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Rebuild Hardware Database...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 02:31:38 np0005603663 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Create System Users...
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 02:31:38 np0005603663 systemd[1]: Mounted FUSE Control File System.
Jan 31 02:31:38 np0005603663 systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 02:31:38 np0005603663 systemd-journald[682]: Received client request to flush runtime journal.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 02:31:38 np0005603663 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Create System Users.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target Local File Systems.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 02:31:38 np0005603663 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 02:31:38 np0005603663 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 02:31:38 np0005603663 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 02:31:38 np0005603663 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 02:31:38 np0005603663 bootctl[700]: Couldn't find EFI system partition, skipping.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Security Auditing Service...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting RPC Bind...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 02:31:38 np0005603663 auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 02:31:38 np0005603663 auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 02:31:38 np0005603663 systemd[1]: Started RPC Bind.
Jan 31 02:31:38 np0005603663 augenrules[711]: /sbin/augenrules: No change
Jan 31 02:31:38 np0005603663 augenrules[726]: No rules
Jan 31 02:31:38 np0005603663 augenrules[726]: enabled 1
Jan 31 02:31:38 np0005603663 augenrules[726]: failure 1
Jan 31 02:31:38 np0005603663 augenrules[726]: pid 706
Jan 31 02:31:38 np0005603663 augenrules[726]: rate_limit 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_limit 8192
Jan 31 02:31:38 np0005603663 augenrules[726]: lost 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog 4
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time 60000
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time_actual 0
Jan 31 02:31:38 np0005603663 augenrules[726]: enabled 1
Jan 31 02:31:38 np0005603663 augenrules[726]: failure 1
Jan 31 02:31:38 np0005603663 augenrules[726]: pid 706
Jan 31 02:31:38 np0005603663 augenrules[726]: rate_limit 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_limit 8192
Jan 31 02:31:38 np0005603663 augenrules[726]: lost 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog 4
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time 60000
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time_actual 0
Jan 31 02:31:38 np0005603663 augenrules[726]: enabled 1
Jan 31 02:31:38 np0005603663 augenrules[726]: failure 1
Jan 31 02:31:38 np0005603663 augenrules[726]: pid 706
Jan 31 02:31:38 np0005603663 augenrules[726]: rate_limit 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_limit 8192
Jan 31 02:31:38 np0005603663 augenrules[726]: lost 0
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog 3
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time 60000
Jan 31 02:31:38 np0005603663 augenrules[726]: backlog_wait_time_actual 0
Jan 31 02:31:38 np0005603663 systemd[1]: Started Security Auditing Service.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Rebuild Hardware Database.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Update is Completed...
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Update is Completed.
Jan 31 02:31:38 np0005603663 systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 02:31:38 np0005603663 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target System Initialization.
Jan 31 02:31:38 np0005603663 systemd[1]: Started dnf makecache --timer.
Jan 31 02:31:38 np0005603663 systemd[1]: Started Daily rotation of log files.
Jan 31 02:31:38 np0005603663 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target Timer Units.
Jan 31 02:31:38 np0005603663 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 02:31:38 np0005603663 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target Socket Units.
Jan 31 02:31:38 np0005603663 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting D-Bus System Message Bus...
Jan 31 02:31:38 np0005603663 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 02:31:38 np0005603663 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 02:31:38 np0005603663 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 02:31:38 np0005603663 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 02:31:38 np0005603663 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 02:31:38 np0005603663 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 02:31:38 np0005603663 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 02:31:38 np0005603663 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 02:31:38 np0005603663 systemd[1]: Started D-Bus System Message Bus.
Jan 31 02:31:38 np0005603663 dbus-broker-lau[771]: Ready
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target Basic System.
Jan 31 02:31:38 np0005603663 systemd[1]: Starting NTP client/server...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 02:31:38 np0005603663 systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 02:31:38 np0005603663 systemd[1]: Started irqbalance daemon.
Jan 31 02:31:38 np0005603663 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 02:31:38 np0005603663 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:31:38 np0005603663 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:31:38 np0005603663 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target sshd-keygen.target.
Jan 31 02:31:38 np0005603663 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 02:31:38 np0005603663 systemd[1]: Reached target User and Group Name Lookups.
Jan 31 02:31:38 np0005603663 chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 02:31:38 np0005603663 chronyd[792]: Loaded 0 symmetric keys
Jan 31 02:31:38 np0005603663 chronyd[792]: Using right/UTC timezone to obtain leap second data
Jan 31 02:31:38 np0005603663 chronyd[792]: Loaded seccomp filter (level 2)
Jan 31 02:31:39 np0005603663 systemd[1]: Starting User Login Management...
Jan 31 02:31:39 np0005603663 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 02:31:39 np0005603663 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 02:31:39 np0005603663 kernel: Console: switching to colour dummy device 80x25
Jan 31 02:31:39 np0005603663 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 02:31:39 np0005603663 kernel: [drm] features: -context_init
Jan 31 02:31:39 np0005603663 systemd[1]: Started NTP client/server.
Jan 31 02:31:39 np0005603663 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 02:31:39 np0005603663 kernel: [drm] number of scanouts: 1
Jan 31 02:31:39 np0005603663 kernel: [drm] number of cap sets: 0
Jan 31 02:31:39 np0005603663 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 02:31:39 np0005603663 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 02:31:39 np0005603663 kernel: Console: switching to colour frame buffer device 128x48
Jan 31 02:31:39 np0005603663 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 02:31:39 np0005603663 kernel: kvm_amd: TSC scaling supported
Jan 31 02:31:39 np0005603663 kernel: kvm_amd: Nested Virtualization enabled
Jan 31 02:31:39 np0005603663 kernel: kvm_amd: Nested Paging enabled
Jan 31 02:31:39 np0005603663 kernel: kvm_amd: LBR virtualization supported
Jan 31 02:31:39 np0005603663 systemd-logind[793]: New seat seat0.
Jan 31 02:31:39 np0005603663 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 02:31:39 np0005603663 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 02:31:39 np0005603663 systemd[1]: Started User Login Management.
Jan 31 02:31:39 np0005603663 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 02:31:39 np0005603663 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 02:31:39 np0005603663 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Jan 31 02:31:39 np0005603663 systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 02:31:39 np0005603663 cloud-init[843]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 07:31:39 +0000. Up 6.09 seconds.
Jan 31 02:31:39 np0005603663 systemd[1]: run-cloud\x2dinit-tmp-tmpd_m9_jnx.mount: Deactivated successfully.
Jan 31 02:31:40 np0005603663 systemd[1]: Starting Hostname Service...
Jan 31 02:31:40 np0005603663 systemd[1]: Started Hostname Service.
Jan 31 02:31:40 np0005603663 systemd-hostnamed[857]: Hostname set to <np0005603663.novalocal> (static)
Jan 31 02:31:40 np0005603663 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 02:31:40 np0005603663 systemd[1]: Reached target Preparation for Network.
Jan 31 02:31:40 np0005603663 systemd[1]: Starting Network Manager...
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2763] NetworkManager (version 1.54.3-2.el9) is starting... (boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2768] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2945] manager[0x55bbc3499000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2990] hostname: hostname: using hostnamed
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2991] hostname: static hostname changed from (none) to "np0005603663.novalocal"
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.2999] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3119] manager[0x55bbc3499000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3120] manager[0x55bbc3499000]: rfkill: WWAN hardware radio set enabled
Jan 31 02:31:40 np0005603663 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3215] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3215] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3216] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3216] manager: Networking is enabled by state file
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3218] settings: Loaded settings plugin: keyfile (internal)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3247] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3277] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3289] dhcp: init: Using DHCP client 'internal'
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3294] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3304] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3316] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3326] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3332] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3334] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3362] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3365] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3367] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3369] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3370] device (eth0): carrier: link connected
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3371] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3376] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3381] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3384] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3385] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3387] manager: NetworkManager state is now CONNECTING
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3389] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3394] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:31:40 np0005603663 systemd[1]: Started Network Manager.
Jan 31 02:31:40 np0005603663 systemd[1]: Reached target Network.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3452] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3459] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3481] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 systemd[1]: Starting Network Manager Wait Online...
Jan 31 02:31:40 np0005603663 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3567] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3570] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 02:31:40 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3609] device (lo): Activation: successful, device activated.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3622] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3625] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3631] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3635] device (eth0): Activation: successful, device activated.
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3644] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 02:31:40 np0005603663 NetworkManager[861]: <info>  [1769844700.3648] manager: startup complete
Jan 31 02:31:40 np0005603663 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 02:31:40 np0005603663 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 02:31:40 np0005603663 systemd[1]: Reached target NFS client services.
Jan 31 02:31:40 np0005603663 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 02:31:40 np0005603663 systemd[1]: Reached target Remote File Systems.
Jan 31 02:31:40 np0005603663 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 02:31:40 np0005603663 systemd[1]: Finished Network Manager Wait Online.
Jan 31 02:31:40 np0005603663 systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 02:31:40 np0005603663 cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 07:31:40 +0000. Up 7.05 seconds.
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.23         | 255.255.255.0 | global | fa:16:3e:10:2a:3d |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe10:2a3d/64 |       .       |  link  | fa:16:3e:10:2a:3d |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 02:31:40 np0005603663 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 02:31:42 np0005603663 cloud-init[924]: Generating public/private rsa key pair.
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key fingerprint is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: SHA256:HxoPDMOYF4QbC5R2UOcIKdXrqzmMIxDBD+XL6CbY91c root@np0005603663.novalocal
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key's randomart image is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: +---[RSA 3072]----+
Jan 31 02:31:42 np0005603663 cloud-init[924]: |.o*B.o+          |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |oo*.=B .         |
Jan 31 02:31:42 np0005603663 cloud-init[924]: | +o++=*          |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |. o.=. +         |
Jan 31 02:31:42 np0005603663 cloud-init[924]: | o +    S .      |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |+.  .    *E.     |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |+=. ..  ..o      |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |* oo..  .        |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |..oo  ..         |
Jan 31 02:31:42 np0005603663 cloud-init[924]: +----[SHA256]-----+
Jan 31 02:31:42 np0005603663 cloud-init[924]: Generating public/private ecdsa key pair.
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key fingerprint is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: SHA256:wga2K2W/JZ6ryQdN2/3oe+M0W2+oTRsn7/Xf/jg38Jg root@np0005603663.novalocal
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key's randomart image is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: +---[ECDSA 256]---+
Jan 31 02:31:42 np0005603663 cloud-init[924]: |                 |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |                 |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |    o            |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |   . +.          |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |    +o+oS.       |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |   o.+o.. .  .   |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |  . ..o .  oo X.o|
Jan 31 02:31:42 np0005603663 cloud-init[924]: |   o o.=  ..+Eo@*|
Jan 31 02:31:42 np0005603663 cloud-init[924]: |    +o=. .o++o+*&|
Jan 31 02:31:42 np0005603663 cloud-init[924]: +----[SHA256]-----+
Jan 31 02:31:42 np0005603663 cloud-init[924]: Generating public/private ed25519 key pair.
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 02:31:42 np0005603663 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key fingerprint is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: SHA256:TWn5HSBHJfh2roAc8eqbKUvdpViNa+gGJbMu7m9qWqs root@np0005603663.novalocal
Jan 31 02:31:42 np0005603663 cloud-init[924]: The key's randomart image is:
Jan 31 02:31:42 np0005603663 cloud-init[924]: +--[ED25519 256]--+
Jan 31 02:31:42 np0005603663 cloud-init[924]: |          .o=..  |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |        . .= o   |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |         o=.  .  |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |      o o+.+o... |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |       *S++.+o.  |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |      o.+=.+  .  |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |    ...o+ =. .   |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |   .oo+.o+  .    |
Jan 31 02:31:42 np0005603663 cloud-init[924]: |  E*==o+=.       |
Jan 31 02:31:42 np0005603663 cloud-init[924]: +----[SHA256]-----+
Jan 31 02:31:42 np0005603663 systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 02:31:42 np0005603663 systemd[1]: Reached target Cloud-config availability.
Jan 31 02:31:42 np0005603663 systemd[1]: Reached target Network is Online.
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Crash recovery kernel arming...
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 02:31:42 np0005603663 systemd[1]: Starting System Logging Service...
Jan 31 02:31:42 np0005603663 systemd[1]: Starting OpenSSH server daemon...
Jan 31 02:31:42 np0005603663 sm-notify[1006]: Version 2.5.4 starting
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Permit User Sessions...
Jan 31 02:31:42 np0005603663 systemd[1]: Started Notify NFS peers of a restart.
Jan 31 02:31:42 np0005603663 systemd[1]: Finished Permit User Sessions.
Jan 31 02:31:42 np0005603663 systemd[1]: Started Command Scheduler.
Jan 31 02:31:42 np0005603663 systemd[1]: Started Getty on tty1.
Jan 31 02:31:42 np0005603663 systemd[1]: Started Serial Getty on ttyS0.
Jan 31 02:31:42 np0005603663 systemd[1]: Reached target Login Prompts.
Jan 31 02:31:42 np0005603663 systemd[1]: Started OpenSSH server daemon.
Jan 31 02:31:42 np0005603663 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 31 02:31:42 np0005603663 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 02:31:42 np0005603663 systemd[1]: Started System Logging Service.
Jan 31 02:31:42 np0005603663 systemd[1]: Reached target Multi-User System.
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 02:31:42 np0005603663 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 02:31:42 np0005603663 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 02:31:42 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:31:42 np0005603663 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Jan 31 02:31:42 np0005603663 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 02:31:42 np0005603663 cloud-init[1131]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 07:31:42 +0000. Up 8.88 seconds.
Jan 31 02:31:42 np0005603663 systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 02:31:42 np0005603663 systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 02:31:42 np0005603663 dracut[1285]: dracut-057-102.git20250818.el9
Jan 31 02:31:42 np0005603663 cloud-init[1303]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 07:31:42 +0000. Up 9.26 seconds.
Jan 31 02:31:42 np0005603663 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 02:31:42 np0005603663 cloud-init[1320]: #############################################################
Jan 31 02:31:42 np0005603663 cloud-init[1324]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 02:31:43 np0005603663 cloud-init[1331]: 256 SHA256:wga2K2W/JZ6ryQdN2/3oe+M0W2+oTRsn7/Xf/jg38Jg root@np0005603663.novalocal (ECDSA)
Jan 31 02:31:43 np0005603663 cloud-init[1337]: 256 SHA256:TWn5HSBHJfh2roAc8eqbKUvdpViNa+gGJbMu7m9qWqs root@np0005603663.novalocal (ED25519)
Jan 31 02:31:43 np0005603663 cloud-init[1344]: 3072 SHA256:HxoPDMOYF4QbC5R2UOcIKdXrqzmMIxDBD+XL6CbY91c root@np0005603663.novalocal (RSA)
Jan 31 02:31:43 np0005603663 cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 02:31:43 np0005603663 cloud-init[1351]: #############################################################
Jan 31 02:31:43 np0005603663 cloud-init[1303]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 07:31:43 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.44 seconds
Jan 31 02:31:43 np0005603663 systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 02:31:43 np0005603663 systemd[1]: Reached target Cloud-init target.
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 02:31:43 np0005603663 dracut[1287]: memstrack is not available
Jan 31 02:31:43 np0005603663 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 02:31:44 np0005603663 dracut[1287]: memstrack is not available
Jan 31 02:31:44 np0005603663 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 02:31:44 np0005603663 dracut[1287]: *** Including module: systemd ***
Jan 31 02:31:44 np0005603663 dracut[1287]: *** Including module: fips ***
Jan 31 02:31:44 np0005603663 dracut[1287]: *** Including module: systemd-initrd ***
Jan 31 02:31:44 np0005603663 dracut[1287]: *** Including module: i18n ***
Jan 31 02:31:45 np0005603663 dracut[1287]: *** Including module: drm ***
Jan 31 02:31:45 np0005603663 chronyd[792]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Jan 31 02:31:45 np0005603663 chronyd[792]: System clock TAI offset set to 37 seconds
Jan 31 02:31:45 np0005603663 dracut[1287]: *** Including module: prefixdevname ***
Jan 31 02:31:45 np0005603663 dracut[1287]: *** Including module: kernel-modules ***
Jan 31 02:31:45 np0005603663 kernel: block vda: the capability attribute has been deprecated.
Jan 31 02:31:45 np0005603663 dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: qemu ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: fstab-sys ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: rootfs-block ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: terminfo ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: udev-rules ***
Jan 31 02:31:46 np0005603663 dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 31 02:31:46 np0005603663 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: virtiofs ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: dracut-systemd ***
Jan 31 02:31:46 np0005603663 chronyd[792]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: usrmount ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: base ***
Jan 31 02:31:46 np0005603663 dracut[1287]: *** Including module: fs-lib ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including module: kdumpbase ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 02:31:47 np0005603663 dracut[1287]:  microcode_ctl module: mangling fw_dir
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 02:31:47 np0005603663 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including module: openssl ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including module: shutdown ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including module: squash ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Including modules done ***
Jan 31 02:31:47 np0005603663 dracut[1287]: *** Installing kernel module dependencies ***
Jan 31 02:31:48 np0005603663 dracut[1287]: *** Installing kernel module dependencies done ***
Jan 31 02:31:48 np0005603663 dracut[1287]: *** Resolving executable dependencies ***
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 25 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 31 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 28 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 32 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 30 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 02:31:49 np0005603663 irqbalance[789]: IRQ 29 affinity is now unmanaged
Jan 31 02:31:49 np0005603663 dracut[1287]: *** Resolving executable dependencies done ***
Jan 31 02:31:49 np0005603663 dracut[1287]: *** Generating early-microcode cpio image ***
Jan 31 02:31:49 np0005603663 dracut[1287]: *** Store current command line parameters ***
Jan 31 02:31:49 np0005603663 dracut[1287]: Stored kernel commandline:
Jan 31 02:31:49 np0005603663 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 31 02:31:50 np0005603663 dracut[1287]: *** Install squash loader ***
Jan 31 02:31:50 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:31:50 np0005603663 dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 31 02:31:51 np0005603663 dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 31 02:31:51 np0005603663 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 02:31:51 np0005603663 dracut[1287]: *** Hardlinking files ***
Jan 31 02:31:52 np0005603663 dracut[1287]: *** Hardlinking files done ***
Jan 31 02:31:52 np0005603663 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 02:31:52 np0005603663 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Jan 31 02:31:52 np0005603663 kdumpctl[1017]: kdump: Starting kdump: [OK]
Jan 31 02:31:52 np0005603663 systemd[1]: Finished Crash recovery kernel arming.
Jan 31 02:31:52 np0005603663 systemd[1]: Startup finished in 1.224s (kernel) + 2.474s (initrd) + 15.620s (userspace) = 19.320s.
Jan 31 02:32:10 np0005603663 systemd[1]: Created slice User Slice of UID 1000.
Jan 31 02:32:10 np0005603663 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 02:32:10 np0005603663 systemd-logind[793]: New session 1 of user zuul.
Jan 31 02:32:10 np0005603663 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 02:32:10 np0005603663 systemd[1]: Starting User Manager for UID 1000...
Jan 31 02:32:10 np0005603663 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 02:32:10 np0005603663 systemd[4307]: Queued start job for default target Main User Target.
Jan 31 02:32:10 np0005603663 systemd[4307]: Created slice User Application Slice.
Jan 31 02:32:10 np0005603663 systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:32:10 np0005603663 systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:32:10 np0005603663 systemd[4307]: Reached target Paths.
Jan 31 02:32:10 np0005603663 systemd[4307]: Reached target Timers.
Jan 31 02:32:10 np0005603663 systemd[4307]: Starting D-Bus User Message Bus Socket...
Jan 31 02:32:10 np0005603663 systemd[4307]: Starting Create User's Volatile Files and Directories...
Jan 31 02:32:10 np0005603663 systemd[4307]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:32:10 np0005603663 systemd[4307]: Reached target Sockets.
Jan 31 02:32:10 np0005603663 systemd[4307]: Finished Create User's Volatile Files and Directories.
Jan 31 02:32:10 np0005603663 systemd[4307]: Reached target Basic System.
Jan 31 02:32:10 np0005603663 systemd[4307]: Reached target Main User Target.
Jan 31 02:32:10 np0005603663 systemd[4307]: Startup finished in 220ms.
Jan 31 02:32:10 np0005603663 systemd[1]: Started User Manager for UID 1000.
Jan 31 02:32:10 np0005603663 systemd[1]: Started Session 1 of User zuul.
Jan 31 02:32:11 np0005603663 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:32:13 np0005603663 python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:32:19 np0005603663 python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:32:20 np0005603663 python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 02:32:22 np0005603663 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Nk/3X7ElbK5UI3l4t5jdoE6KuIlCQvu2c4Ei9SOHuuE9jliuN7rH2FkoHI1foCUeuqIIbuVzRH47hK+pxsFN5aoANBwNDx1IwijCiyH4vm8xmIQQzFRxMcIchAX5xdujjhf3pqG0A9IW2WVdYY2aFX2RA0L7I2TgYUbHrrGO/Z/9EUolfHRtmZIGhQgTzUTv7hJNTs24+mTQctJVNQgt41VaDc+wjjcfbiqFy4OdGWxdxXTNnQY/NMkp/X72NSJtBMNl2a0AWJivbPkO9V0q5fAM8zrcLDTJkPuMScptn+k3t8abB/Jy9NFuwujTB+7X4XAxGqMei9w4QM4Ml9hlngPdHF7xq8hEq50HG9DhKc+swIne3H9ZWlpnRwna9KxB0DerbNki0ClbzqWuvIZmzf9YzUZHfRAQfSuzhJT1/BlmDmTzRel0q/1exqyzleQFl1dmb4wErD64iemohgYdLioDwHqXivKuNBLULdM/pt2E9yh6HJGNf6FwZ5zkjl0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:22 np0005603663 python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:23 np0005603663 python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:23 np0005603663 python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844742.9382787-207-145605384033662/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b80ea159c8054a21a4460cdc1f619690_id_rsa follow=False checksum=a281031e2470a2409ffaebd8f471464a1a03b1ee backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:24 np0005603663 python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:24 np0005603663 python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844743.8844178-240-192301579928147/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b80ea159c8054a21a4460cdc1f619690_id_rsa.pub follow=False checksum=20ff6b517a1b3593179ca6b6f5da64fb7270f957 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:25 np0005603663 python3[4979]: ansible-ping Invoked with data=pong
Jan 31 02:32:26 np0005603663 python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:32:29 np0005603663 python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 02:32:30 np0005603663 python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:30 np0005603663 python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:30 np0005603663 python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:31 np0005603663 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:31 np0005603663 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:31 np0005603663 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:33 np0005603663 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:33 np0005603663 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:34 np0005603663 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844753.5735636-21-83259087156497/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:35 np0005603663 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:35 np0005603663 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:35 np0005603663 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:35 np0005603663 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:36 np0005603663 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:36 np0005603663 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:36 np0005603663 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:37 np0005603663 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:37 np0005603663 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:37 np0005603663 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:37 np0005603663 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:38 np0005603663 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:38 np0005603663 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:38 np0005603663 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:38 np0005603663 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:39 np0005603663 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:39 np0005603663 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:39 np0005603663 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 02:32:39 np0005603663 irqbalance[789]: IRQ 26 affinity is now unmanaged
Jan 31 02:32:39 np0005603663 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:39 np0005603663 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:40 np0005603663 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:40 np0005603663 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:40 np0005603663 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:40 np0005603663 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:41 np0005603663 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:41 np0005603663 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:41 np0005603663 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:32:45 np0005603663 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 02:32:45 np0005603663 systemd[1]: Starting Time & Date Service...
Jan 31 02:32:45 np0005603663 systemd[1]: Started Time & Date Service.
Jan 31 02:32:45 np0005603663 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 31 02:32:45 np0005603663 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:46 np0005603663 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:46 np0005603663 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769844766.0813003-153-54903357118270/source _original_basename=tmp05qh8us3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:47 np0005603663 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:47 np0005603663 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769844766.94846-183-101293122512547/source _original_basename=tmpg6pll65s follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:48 np0005603663 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:48 np0005603663 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769844768.0524254-231-119833535994984/source _original_basename=tmpn1h2xzgv follow=False checksum=6bf095e75b543d66829428b8a294812d38465cfe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:49 np0005603663 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:32:49 np0005603663 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:32:49 np0005603663 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:32:50 np0005603663 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844769.7094617-273-50625801847282/source _original_basename=tmp8y66r_f4 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:51 np0005603663 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-1870-bf1a-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:32:51 np0005603663 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-1870-bf1a-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 02:32:52 np0005603663 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:59 np0005603663 irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 02:32:59 np0005603663 irqbalance[789]: IRQ 27 affinity is now unmanaged
Jan 31 02:33:15 np0005603663 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 02:33:16 np0005603663 python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 02:33:50 np0005603663 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 02:33:50 np0005603663 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4266] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 02:33:50 np0005603663 systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4437] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4463] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4468] device (eth1): carrier: link connected
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4471] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4478] policy: auto-activating connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4484] device (eth1): Activation: starting connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4486] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4490] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4495] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:33:50 np0005603663 NetworkManager[861]: <info>  [1769844830.4499] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:33:51 np0005603663 python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-4cef-bc83-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:34:01 np0005603663 python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:34:01 np0005603663 python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844840.7602708-102-256474702195950/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b65d917452bf41b08bb7a11e59261a67db6a7912 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:34:02 np0005603663 python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:34:02 np0005603663 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 02:34:02 np0005603663 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 02:34:02 np0005603663 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 02:34:02 np0005603663 systemd[1]: Stopping Network Manager...
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5083] caught SIGTERM, shutting down normally.
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5097] dhcp4 (eth0): canceled DHCP transaction
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5098] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5098] dhcp4 (eth0): state changed no lease
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5104] manager: NetworkManager state is now CONNECTING
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5213] dhcp4 (eth1): canceled DHCP transaction
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5213] dhcp4 (eth1): state changed no lease
Jan 31 02:34:02 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:34:02 np0005603663 NetworkManager[861]: <info>  [1769844842.5259] exiting (success)
Jan 31 02:34:02 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:34:02 np0005603663 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 02:34:02 np0005603663 systemd[1]: Stopped Network Manager.
Jan 31 02:34:02 np0005603663 systemd[1]: NetworkManager.service: Consumed 1.311s CPU time, 10.1M memory peak.
Jan 31 02:34:02 np0005603663 systemd[1]: Starting Network Manager...
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.5662] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.5665] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.5723] manager[0x563a90aee000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 02:34:02 np0005603663 systemd[1]: Starting Hostname Service...
Jan 31 02:34:02 np0005603663 systemd[1]: Started Hostname Service.
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6400] hostname: hostname: using hostnamed
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6402] hostname: static hostname changed from (none) to "np0005603663.novalocal"
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6407] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6414] manager[0x563a90aee000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6415] manager[0x563a90aee000]: rfkill: WWAN hardware radio set enabled
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6440] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6440] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6441] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6442] manager: Networking is enabled by state file
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6444] settings: Loaded settings plugin: keyfile (internal)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6447] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6468] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6475] dhcp: init: Using DHCP client 'internal'
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6477] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6480] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6484] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6489] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6495] device (eth0): carrier: link connected
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6498] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6502] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6503] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6507] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6511] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6515] device (eth1): carrier: link connected
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6520] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6524] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c) (indicated)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6524] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6527] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6533] device (eth1): Activation: starting connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 02:34:02 np0005603663 systemd[1]: Started Network Manager.
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6539] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6543] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6545] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6546] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6548] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6550] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6551] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6554] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6556] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6560] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6562] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6568] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6570] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6589] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6590] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6595] device (lo): Activation: successful, device activated.
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6601] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6606] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6674] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 systemd[1]: Starting Network Manager Wait Online...
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6697] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6698] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6701] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6704] device (eth0): Activation: successful, device activated.
Jan 31 02:34:02 np0005603663 NetworkManager[7191]: <info>  [1769844842.6709] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 02:34:02 np0005603663 python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-4cef-bc83-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:34:12 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:34:32 np0005603663 systemd[4307]: Starting Mark boot as successful...
Jan 31 02:34:32 np0005603663 systemd[4307]: Finished Mark boot as successful.
Jan 31 02:34:32 np0005603663 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6304] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:34:47 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:34:47 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6682] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6691] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6708] device (eth1): Activation: successful, device activated.
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6721] manager: startup complete
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6723] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <warn>  [1769844887.6741] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6751] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 systemd[1]: Finished Network Manager Wait Online.
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6893] dhcp4 (eth1): canceled DHCP transaction
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6894] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6894] dhcp4 (eth1): state changed no lease
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6908] policy: auto-activating connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6913] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6914] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6917] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6924] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6933] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6974] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6975] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:34:47 np0005603663 NetworkManager[7191]: <info>  [1769844887.6979] device (eth1): Activation: successful, device activated.
Jan 31 02:34:57 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:35:03 np0005603663 systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Jan 31 02:35:06 np0005603663 systemd-logind[793]: New session 3 of user zuul.
Jan 31 02:35:06 np0005603663 systemd[1]: Started Session 3 of User zuul.
Jan 31 02:35:06 np0005603663 python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:35:07 np0005603663 python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844906.669143-267-82077250691609/source _original_basename=tmpss547bu9 follow=False checksum=1f1caabb57b4a4203f0a901b5db5015b865079c5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:09 np0005603663 systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 02:35:09 np0005603663 systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Jan 31 02:35:09 np0005603663 systemd-logind[793]: Removed session 3.
Jan 31 02:37:32 np0005603663 systemd[4307]: Created slice User Background Tasks Slice.
Jan 31 02:37:32 np0005603663 systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 02:37:32 np0005603663 systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 02:40:00 np0005603663 systemd-logind[793]: New session 4 of user zuul.
Jan 31 02:40:00 np0005603663 systemd[1]: Started Session 4 of User zuul.
Jan 31 02:40:00 np0005603663 python3[7510]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-0250-99f0-000000002159-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:01 np0005603663 python3[7539]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:01 np0005603663 python3[7565]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:01 np0005603663 python3[7591]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:01 np0005603663 python3[7617]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:02 np0005603663 python3[7643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:03 np0005603663 python3[7721]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:40:03 np0005603663 python3[7794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845203.037129-488-173544911627575/source _original_basename=tmpbqhi5qh2 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:40:04 np0005603663 python3[7844]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:40:04 np0005603663 systemd[1]: Reloading.
Jan 31 02:40:04 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:40:06 np0005603663 python3[7900]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 02:40:06 np0005603663 python3[7926]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:07 np0005603663 python3[7954]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:07 np0005603663 python3[7982]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:07 np0005603663 python3[8010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:08 np0005603663 python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-0250-99f0-000000002160-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:40:08 np0005603663 python3[8067]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:40:10 np0005603663 systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Jan 31 02:40:10 np0005603663 systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 02:40:10 np0005603663 systemd[1]: session-4.scope: Consumed 3.728s CPU time.
Jan 31 02:40:10 np0005603663 systemd-logind[793]: Removed session 4.
Jan 31 02:40:12 np0005603663 systemd-logind[793]: New session 5 of user zuul.
Jan 31 02:40:12 np0005603663 systemd[1]: Started Session 5 of User zuul.
Jan 31 02:40:12 np0005603663 python3[8104]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 02:40:20 np0005603663 setsebool[8147]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 02:40:20 np0005603663 setsebool[8147]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 02:40:37 np0005603663 kernel: SELinux:  Converting 385 SID table entries...
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:40:37 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  Converting 388 SID table entries...
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:40:47 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:41:04 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 02:41:05 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:41:05 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:41:05 np0005603663 systemd[1]: Reloading.
Jan 31 02:41:05 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:41:05 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:41:07 np0005603663 python3[10701]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-c553-bc95-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:41:07 np0005603663 kernel: evm: overlay not supported
Jan 31 02:41:07 np0005603663 systemd[4307]: Starting D-Bus User Message Bus...
Jan 31 02:41:07 np0005603663 dbus-broker-launch[11877]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 02:41:07 np0005603663 dbus-broker-launch[11877]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 02:41:07 np0005603663 systemd[4307]: Started D-Bus User Message Bus.
Jan 31 02:41:07 np0005603663 dbus-broker-lau[11877]: Ready
Jan 31 02:41:07 np0005603663 systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 02:41:07 np0005603663 systemd[4307]: Created slice Slice /user.
Jan 31 02:41:07 np0005603663 systemd[4307]: podman-11747.scope: unit configures an IP firewall, but not running as root.
Jan 31 02:41:07 np0005603663 systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 02:41:07 np0005603663 systemd[4307]: Started podman-11747.scope.
Jan 31 02:41:08 np0005603663 systemd[4307]: Started podman-pause-7512294e.scope.
Jan 31 02:41:09 np0005603663 python3[12965]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.245:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.245:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:41:09 np0005603663 python3[12965]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 02:41:09 np0005603663 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 02:41:09 np0005603663 systemd[1]: session-5.scope: Consumed 39.903s CPU time.
Jan 31 02:41:09 np0005603663 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Jan 31 02:41:09 np0005603663 systemd-logind[793]: Removed session 5.
Jan 31 02:41:32 np0005603663 systemd-logind[793]: New session 6 of user zuul.
Jan 31 02:41:32 np0005603663 systemd[1]: Started Session 6 of User zuul.
Jan 31 02:41:33 np0005603663 python3[26172]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:41:33 np0005603663 python3[26394]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:41:34 np0005603663 python3[26856]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603663.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 02:41:34 np0005603663 python3[27101]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 02:41:35 np0005603663 python3[27396]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:41:35 np0005603663 python3[27705]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845294.9553869-135-181466581987942/source _original_basename=tmphrrxpvdr follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:41:36 np0005603663 python3[28065]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 02:41:36 np0005603663 systemd[1]: Starting Hostname Service...
Jan 31 02:41:36 np0005603663 systemd[1]: Started Hostname Service.
Jan 31 02:41:36 np0005603663 systemd-hostnamed[28173]: Changed pretty hostname to 'compute-0'
Jan 31 02:41:36 np0005603663 systemd-hostnamed[28173]: Hostname set to <compute-0> (static)
Jan 31 02:41:36 np0005603663 NetworkManager[7191]: <info>  [1769845296.5788] hostname: static hostname changed from "np0005603663.novalocal" to "compute-0"
Jan 31 02:41:36 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:41:36 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:41:36 np0005603663 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 02:41:36 np0005603663 systemd[1]: session-6.scope: Consumed 2.166s CPU time.
Jan 31 02:41:36 np0005603663 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Jan 31 02:41:36 np0005603663 systemd-logind[793]: Removed session 6.
Jan 31 02:41:40 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:41:40 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:41:40 np0005603663 systemd[1]: man-db-cache-update.service: Consumed 41.050s CPU time.
Jan 31 02:41:40 np0005603663 systemd[1]: run-racebbd5a8ccf495490f48e06e73692f3.service: Deactivated successfully.
Jan 31 02:41:46 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:42:06 np0005603663 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 02:45:56 np0005603663 systemd-logind[793]: New session 7 of user zuul.
Jan 31 02:45:56 np0005603663 systemd[1]: Started Session 7 of User zuul.
Jan 31 02:45:57 np0005603663 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:45:58 np0005603663 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:45:59 np0005603663 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:45:59 np0005603663 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:45:59 np0005603663 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:00 np0005603663 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:46:00 np0005603663 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:00 np0005603663 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:46:01 np0005603663 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:01 np0005603663 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:46:01 np0005603663 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:01 np0005603663 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:46:02 np0005603663 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:02 np0005603663 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:46:02 np0005603663 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:46:17 np0005603663 python3[30913]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:47:02 np0005603663 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 02:47:02 np0005603663 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 02:47:02 np0005603663 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 02:47:02 np0005603663 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 02:51:17 np0005603663 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Jan 31 02:51:17 np0005603663 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 02:51:17 np0005603663 systemd[1]: session-7.scope: Consumed 4.465s CPU time.
Jan 31 02:51:17 np0005603663 systemd-logind[793]: Removed session 7.
Jan 31 02:57:51 np0005603663 systemd-logind[793]: New session 8 of user zuul.
Jan 31 02:57:51 np0005603663 systemd[1]: Started Session 8 of User zuul.
Jan 31 02:57:52 np0005603663 python3.9[31084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:57:53 np0005603663 python3.9[31265]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:58:05 np0005603663 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 02:58:05 np0005603663 systemd[1]: session-8.scope: Consumed 7.996s CPU time.
Jan 31 02:58:05 np0005603663 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Jan 31 02:58:05 np0005603663 systemd-logind[793]: Removed session 8.
Jan 31 02:58:21 np0005603663 systemd-logind[793]: New session 9 of user zuul.
Jan 31 02:58:21 np0005603663 systemd[1]: Started Session 9 of User zuul.
Jan 31 02:58:21 np0005603663 python3.9[31476]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 02:58:22 np0005603663 python3.9[31650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:58:23 np0005603663 python3.9[31802]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:58:24 np0005603663 python3.9[31955]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:58:25 np0005603663 python3.9[32107]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:58:26 np0005603663 python3.9[32259]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:58:27 np0005603663 python3.9[32382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846305.8833394-68-166759595749535/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:58:27 np0005603663 python3.9[32534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:58:28 np0005603663 python3.9[32690]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:58:29 np0005603663 python3.9[32842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:58:29 np0005603663 python3.9[32992]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:58:32 np0005603663 python3.9[33245]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:58:33 np0005603663 python3.9[33395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:58:34 np0005603663 python3.9[33549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:58:35 np0005603663 python3.9[33707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:58:36 np0005603663 python3.9[33791]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:59:17 np0005603663 systemd[1]: Reloading.
Jan 31 02:59:17 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:59:17 np0005603663 systemd[1]: Starting dnf makecache...
Jan 31 02:59:17 np0005603663 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 02:59:18 np0005603663 dnf[34001]: Failed determining last makecache time.
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-barbican-42b4c41831408a8e323 151 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-glean-642fffe0203a8ffcc2443db52 177 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-cinder-1c00d6490d88e436f26ef 173 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-stevedore-c4acc5639fd2329372142 190 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-cloudkitty-tests-tempest-783703 184 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-diskimage-builder-61b717cc45660834fe9a 166 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-nova-eaa65f0b85123a4ee343246 159 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 systemd[1]: Reloading.
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-designate-tests-tempest-347fdbc 166 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-glance-1fd12c29b339f30fe823e 154 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 137 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-manila-d783d10e75495b73866db 162 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-neutron-95cadbd379667c8520c8 161 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-octavia-5975097dd4b021385178 194 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-watcher-c014f81a8647287f6dcc 175 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-tcib-78032d201b02cee27e8e644c61 191 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 187 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-swift-dc98a8463506ac520c469a 129 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-python-tempestconf-8515371b7cceebd4282 193 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 dnf[34001]: delorean-openstack-heat-ui-013accbfd179753bc3f0 197 kB/s | 3.0 kB     00:00
Jan 31 02:59:18 np0005603663 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 02:59:18 np0005603663 systemd[1]: Reloading.
Jan 31 02:59:18 np0005603663 dnf[34001]: CentOS Stream 9 - BaseOS                         60 kB/s | 6.1 kB     00:00
Jan 31 02:59:18 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:59:18 np0005603663 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 02:59:18 np0005603663 dnf[34001]: CentOS Stream 9 - AppStream                      63 kB/s | 6.5 kB     00:00
Jan 31 02:59:18 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 02:59:18 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 02:59:18 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 02:59:18 np0005603663 dnf[34001]: CentOS Stream 9 - CRB                            60 kB/s | 6.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: CentOS Stream 9 - Extras packages                73 kB/s | 7.3 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: dlrn-antelope-testing                           146 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: dlrn-antelope-build-deps                        147 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: centos9-rabbitmq                                122 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: centos9-storage                                 125 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: centos9-opstools                                139 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: NFV SIG OpenvSwitch                             138 kB/s | 3.0 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: repo-setup-centos-appstream                     179 kB/s | 4.4 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: repo-setup-centos-baseos                        119 kB/s | 3.9 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: repo-setup-centos-highavailability              115 kB/s | 3.9 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: repo-setup-centos-powertools                    168 kB/s | 4.3 kB     00:00
Jan 31 02:59:19 np0005603663 dnf[34001]: Extra Packages for Enterprise Linux 9 - x86_64  233 kB/s |  31 kB     00:00
Jan 31 02:59:20 np0005603663 dnf[34001]: Metadata cache created.
Jan 31 02:59:20 np0005603663 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 02:59:20 np0005603663 systemd[1]: Finished dnf makecache.
Jan 31 02:59:20 np0005603663 systemd[1]: dnf-makecache.service: Consumed 1.731s CPU time.
Jan 31 03:00:24 np0005603663 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:00:24 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:00:24 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 03:00:24 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:00:24 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:00:24 np0005603663 systemd[1]: Reloading.
Jan 31 03:00:24 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:00:24 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:00:25 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:00:25 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:00:25 np0005603663 systemd[1]: run-refecc68b3f4743bf9e3a292f814a0b05.service: Deactivated successfully.
Jan 31 03:00:26 np0005603663 python3.9[35355]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:00:28 np0005603663 python3.9[35636]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 03:00:29 np0005603663 python3.9[35788]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 03:00:32 np0005603663 python3.9[35941]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:00:33 np0005603663 python3.9[36093]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 03:00:34 np0005603663 python3.9[36245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:00:40 np0005603663 python3.9[36397]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:00:40 np0005603663 python3.9[36520]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846434.9325702-231-227987619527251/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:00:41 np0005603663 python3.9[36672]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:00:42 np0005603663 python3.9[36824]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:00:42 np0005603663 python3.9[36977]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:00:43 np0005603663 python3.9[37129]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 03:00:43 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:00:43 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:00:44 np0005603663 python3.9[37283]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:00:45 np0005603663 python3.9[37441]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 03:00:45 np0005603663 python3.9[37601]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 03:00:46 np0005603663 python3.9[37754]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:00:47 np0005603663 python3.9[37912]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 03:00:47 np0005603663 python3.9[38064]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:00:49 np0005603663 python3.9[38217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:00:50 np0005603663 python3.9[38369]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:00:51 np0005603663 python3.9[38492]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846450.1510189-350-265004746518100/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:00:52 np0005603663 python3.9[38644]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:00:52 np0005603663 systemd[1]: Starting Load Kernel Modules...
Jan 31 03:00:52 np0005603663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 03:00:52 np0005603663 kernel: Bridge firewalling registered
Jan 31 03:00:52 np0005603663 systemd-modules-load[38648]: Inserted module 'br_netfilter'
Jan 31 03:00:52 np0005603663 systemd[1]: Finished Load Kernel Modules.
Jan 31 03:00:52 np0005603663 python3.9[38804]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:00:53 np0005603663 python3.9[38927]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846452.3765864-373-118189603131491/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:00:53 np0005603663 python3.9[39079]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:00:56 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 03:00:57 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 03:00:57 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:00:57 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:00:57 np0005603663 systemd[1]: Reloading.
Jan 31 03:00:57 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:00:57 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:00:58 np0005603663 python3.9[40631]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:00:59 np0005603663 python3.9[41801]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 03:01:00 np0005603663 python3.9[42667]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:01:00 np0005603663 python3.9[43310]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:01 np0005603663 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 03:01:01 np0005603663 systemd[1]: Starting Authorization Manager...
Jan 31 03:01:01 np0005603663 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 03:01:01 np0005603663 polkitd[43527]: Started polkitd version 0.117
Jan 31 03:01:01 np0005603663 systemd[1]: Started Authorization Manager.
Jan 31 03:01:01 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:01:01 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:01:01 np0005603663 systemd[1]: man-db-cache-update.service: Consumed 3.680s CPU time.
Jan 31 03:01:01 np0005603663 systemd[1]: run-rf67aeb44f1144730aad031132dccd811.service: Deactivated successfully.
Jan 31 03:01:02 np0005603663 python3.9[43713]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:01:02 np0005603663 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 03:01:02 np0005603663 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 03:01:02 np0005603663 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 03:01:02 np0005603663 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 03:01:02 np0005603663 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 03:01:03 np0005603663 python3.9[43875]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 03:01:05 np0005603663 python3.9[44027]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:01:05 np0005603663 systemd[1]: Reloading.
Jan 31 03:01:05 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:01:06 np0005603663 python3.9[44216]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:01:06 np0005603663 systemd[1]: Reloading.
Jan 31 03:01:06 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:01:07 np0005603663 python3.9[44404]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:08 np0005603663 python3.9[44557]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:08 np0005603663 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 03:01:09 np0005603663 python3.9[44710]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:11 np0005603663 python3.9[44872]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:12 np0005603663 python3.9[45025]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:01:12 np0005603663 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 03:01:12 np0005603663 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 03:01:12 np0005603663 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 03:01:12 np0005603663 systemd[1]: Starting Apply Kernel Variables...
Jan 31 03:01:12 np0005603663 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 03:01:12 np0005603663 systemd[1]: Finished Apply Kernel Variables.
Jan 31 03:01:12 np0005603663 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 03:01:12 np0005603663 systemd[1]: session-9.scope: Consumed 2min 2.094s CPU time.
Jan 31 03:01:12 np0005603663 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Jan 31 03:01:12 np0005603663 systemd-logind[793]: Removed session 9.
Jan 31 03:01:17 np0005603663 systemd-logind[793]: New session 10 of user zuul.
Jan 31 03:01:17 np0005603663 systemd[1]: Started Session 10 of User zuul.
Jan 31 03:01:18 np0005603663 python3.9[45208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:01:19 np0005603663 python3.9[45364]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 03:01:20 np0005603663 python3.9[45517]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:01:21 np0005603663 python3.9[45675]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 03:01:22 np0005603663 python3.9[45835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:01:23 np0005603663 python3.9[45919]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 03:01:28 np0005603663 python3.9[46082]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:01:42 np0005603663 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:01:42 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:01:43 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 03:01:43 np0005603663 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 03:01:45 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:01:45 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:01:45 np0005603663 systemd[1]: Reloading.
Jan 31 03:01:45 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:01:45 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:01:45 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:01:49 np0005603663 python3.9[47183]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:01:49 np0005603663 systemd[1]: Reloading.
Jan 31 03:01:49 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:01:49 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:01:49 np0005603663 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 03:01:49 np0005603663 chown[47224]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 03:01:49 np0005603663 ovs-ctl[47229]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 03:01:49 np0005603663 ovs-ctl[47229]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 03:01:49 np0005603663 ovs-ctl[47229]: Starting ovsdb-server [  OK  ]
Jan 31 03:01:49 np0005603663 ovs-vsctl[47278]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 03:01:49 np0005603663 ovs-vsctl[47298]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c8bc61c4-1b90-42d4-9c52-3d83532ede66\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 03:01:49 np0005603663 ovs-ctl[47229]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 03:01:49 np0005603663 ovs-ctl[47229]: Enabling remote OVSDB managers [  OK  ]
Jan 31 03:01:49 np0005603663 ovs-vsctl[47304]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 03:01:49 np0005603663 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 03:01:49 np0005603663 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 03:01:49 np0005603663 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 03:01:49 np0005603663 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 03:01:49 np0005603663 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 03:01:49 np0005603663 ovs-ctl[47349]: Inserting openvswitch module [  OK  ]
Jan 31 03:01:50 np0005603663 ovs-ctl[47317]: Starting ovs-vswitchd [  OK  ]
Jan 31 03:01:50 np0005603663 ovs-ctl[47317]: Enabling remote OVSDB managers [  OK  ]
Jan 31 03:01:50 np0005603663 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 03:01:50 np0005603663 ovs-vsctl[47366]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 03:01:50 np0005603663 systemd[1]: Starting Open vSwitch...
Jan 31 03:01:50 np0005603663 systemd[1]: Finished Open vSwitch.
Jan 31 03:01:50 np0005603663 python3.9[47518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:01:50 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:01:50 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:01:50 np0005603663 systemd[1]: run-rec31d1dc21a242df9f010955ddf265db.service: Deactivated successfully.
Jan 31 03:01:51 np0005603663 python3.9[47671]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 03:01:54 np0005603663 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:01:54 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:01:55 np0005603663 python3.9[47826]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:01:56 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 03:01:56 np0005603663 python3.9[47984]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:01:58 np0005603663 python3.9[48137]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:01:59 np0005603663 python3.9[48424]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 03:02:00 np0005603663 python3.9[48574]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:02:01 np0005603663 python3.9[48728]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:02:03 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:02:03 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:02:03 np0005603663 systemd[1]: Reloading.
Jan 31 03:02:03 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:02:03 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:02:03 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:02:03 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:02:03 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:02:03 np0005603663 systemd[1]: run-r9af21980559144b7b34fe72aa952b5f0.service: Deactivated successfully.
Jan 31 03:02:04 np0005603663 python3.9[49045]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:02:04 np0005603663 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 03:02:04 np0005603663 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 03:02:04 np0005603663 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 03:02:04 np0005603663 systemd[1]: Stopping Network Manager...
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3630] caught SIGTERM, shutting down normally.
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): canceled DHCP transaction
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): state changed no lease
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3645] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 03:02:04 np0005603663 NetworkManager[7191]: <info>  [1769846524.3696] exiting (success)
Jan 31 03:02:04 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 03:02:04 np0005603663 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 03:02:04 np0005603663 systemd[1]: Stopped Network Manager.
Jan 31 03:02:04 np0005603663 systemd[1]: NetworkManager.service: Consumed 14.651s CPU time, 4.1M memory peak, read 0B from disk, written 35.0K to disk.
Jan 31 03:02:04 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 03:02:04 np0005603663 systemd[1]: Starting Network Manager...
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4189] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4190] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4233] manager[0x55c1ea6eb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 03:02:04 np0005603663 systemd[1]: Starting Hostname Service...
Jan 31 03:02:04 np0005603663 systemd[1]: Started Hostname Service.
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4914] hostname: hostname: using hostnamed
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4916] hostname: static hostname changed from (none) to "compute-0"
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4927] manager[0x55c1ea6eb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4931] manager[0x55c1ea6eb000]: rfkill: WWAN hardware radio set enabled
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4949] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4960] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4961] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4962] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4963] manager: Networking is enabled by state file
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4965] settings: Loaded settings plugin: keyfile (internal)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4970] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.4996] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5005] dhcp: init: Using DHCP client 'internal'
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5008] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5014] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5021] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5030] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5035] device (eth0): carrier: link connected
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5040] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5048] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5049] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5056] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5066] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5071] device (eth1): carrier: link connected
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5075] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5082] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631) (indicated)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5083] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5091] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5101] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 03:02:04 np0005603663 systemd[1]: Started Network Manager.
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5107] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5120] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5124] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5127] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5130] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5137] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5140] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5145] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5148] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5156] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5160] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5168] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5184] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5193] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5198] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5204] device (lo): Activation: successful, device activated.
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5214] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5218] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5223] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 03:02:04 np0005603663 NetworkManager[49054]: <info>  [1769846524.5228] device (eth1): Activation: successful, device activated.
Jan 31 03:02:04 np0005603663 systemd[1]: Starting Network Manager Wait Online...
Jan 31 03:02:05 np0005603663 python3.9[49252]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.4000] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.4009] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5952] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5986] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5988] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5991] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5993] device (eth0): Activation: successful, device activated.
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.5998] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 03:02:06 np0005603663 NetworkManager[49054]: <info>  [1769846526.6001] manager: startup complete
Jan 31 03:02:06 np0005603663 systemd[1]: Finished Network Manager Wait Online.
Jan 31 03:02:10 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:02:10 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:02:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:02:11 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:02:11 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:02:11 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:02:12 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:02:12 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:02:12 np0005603663 systemd[1]: run-rdce7cd79d0ce4e5c9df465071fbc7bee.service: Deactivated successfully.
Jan 31 03:02:12 np0005603663 python3.9[49732]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:02:13 np0005603663 python3.9[49884]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:14 np0005603663 python3.9[50038]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:14 np0005603663 python3.9[50190]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:15 np0005603663 python3.9[50342]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:16 np0005603663 python3.9[50494]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:16 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 03:02:16 np0005603663 python3.9[50646]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:02:17 np0005603663 python3.9[50769]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846536.300346-224-248937413928306/.source _original_basename=.5ygojt6x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:17 np0005603663 python3.9[50921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:18 np0005603663 python3.9[51073]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 03:02:19 np0005603663 python3.9[51225]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:21 np0005603663 python3.9[51652]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 03:02:22 np0005603663 ansible-async_wrapper.py[51827]: Invoked with j484590135125 300 /home/zuul/.ansible/tmp/ansible-tmp-1769846541.7012212-290-27978434884099/AnsiballZ_edpm_os_net_config.py _
Jan 31 03:02:22 np0005603663 ansible-async_wrapper.py[51830]: Starting module and watcher
Jan 31 03:02:22 np0005603663 ansible-async_wrapper.py[51830]: Start watching 51831 (300)
Jan 31 03:02:22 np0005603663 ansible-async_wrapper.py[51831]: Start module (51831)
Jan 31 03:02:22 np0005603663 ansible-async_wrapper.py[51827]: Return async_wrapper task started.
Jan 31 03:02:22 np0005603663 python3.9[51832]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 03:02:23 np0005603663 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 03:02:23 np0005603663 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 03:02:23 np0005603663 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 03:02:23 np0005603663 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 03:02:23 np0005603663 kernel: cfg80211: failed to load regulatory.db
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2027] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2048] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2614] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2616] audit: op="connection-add" uuid="9ee11f4b-31f1-43a5-8f98-0087a73970cd" name="br-ex-br" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2628] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2629] audit: op="connection-add" uuid="6f4f3e81-680d-4928-b50c-a7b92a461894" name="br-ex-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2640] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2641] audit: op="connection-add" uuid="d5a2bc86-9453-4ca9-bca0-3397e82c83de" name="eth1-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2653] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2654] audit: op="connection-add" uuid="aaa52c88-497c-414c-915e-b58dc0f180a8" name="vlan20-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2665] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2666] audit: op="connection-add" uuid="d506542e-c2b4-431f-8734-c2c067b68112" name="vlan21-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2677] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2679] audit: op="connection-add" uuid="62acf6a5-d2ed-4d60-a333-951bcdf17f4b" name="vlan22-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2688] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2690] audit: op="connection-add" uuid="26e21aa9-d0c6-4793-8106-8ace15032fc8" name="vlan23-port" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2709] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2725] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2727] audit: op="connection-add" uuid="ce25c6e1-6866-4c39-aed6-0b91b3458a87" name="br-ex-if" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2769] audit: op="connection-update" uuid="b1c2768f-0cc5-558f-b3d7-45fa9d4a2631" name="ci-private-network" args="ovs-external-ids.data,connection.master,connection.controller,connection.port-type,connection.timestamp,connection.slave-type,ovs-interface.type,ipv6.routes,ipv6.addresses,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv4.addresses,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routes" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2783] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2785] audit: op="connection-add" uuid="2701d975-ede4-4c82-a55d-27ab5cda392a" name="vlan20-if" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2799] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2801] audit: op="connection-add" uuid="ee581e17-f1bb-4e5e-89c7-5ea2dffcb049" name="vlan21-if" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2815] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2817] audit: op="connection-add" uuid="5c088239-6ddc-4503-b787-c018fb7084c7" name="vlan22-if" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2831] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2833] audit: op="connection-add" uuid="b4f0b405-f89a-494e-ab03-ec55ecb4f1de" name="vlan23-if" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2843] audit: op="connection-delete" uuid="ab34d4aa-4908-314b-843b-ee48e300858c" name="Wired connection 1" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2854] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2856] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2863] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2867] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9ee11f4b-31f1-43a5-8f98-0087a73970cd)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2868] audit: op="connection-activate" uuid="9ee11f4b-31f1-43a5-8f98-0087a73970cd" name="br-ex-br" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2870] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2872] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2877] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2881] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6f4f3e81-680d-4928-b50c-a7b92a461894)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2883] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2885] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2889] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2893] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d5a2bc86-9453-4ca9-bca0-3397e82c83de)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2895] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2897] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2902] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2906] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (aaa52c88-497c-414c-915e-b58dc0f180a8)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2908] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2910] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2915] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2919] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (d506542e-c2b4-431f-8734-c2c067b68112)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2921] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2922] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2927] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2932] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (62acf6a5-d2ed-4d60-a333-951bcdf17f4b)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2934] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2935] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2940] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2945] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (26e21aa9-d0c6-4793-8106-8ace15032fc8)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2946] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2949] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2951] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.2959] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2962] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2966] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ce25c6e1-6866-4c39-aed6-0b91b3458a87)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2967] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2971] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2974] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2976] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2978] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2987] device (eth1): disconnecting for new activation request.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2988] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2991] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2994] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2996] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.2999] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.3000] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3004] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3008] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (2701d975-ede4-4c82-a55d-27ab5cda392a)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3009] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3012] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3015] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3016] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3019] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.3021] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3024] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3028] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ee581e17-f1bb-4e5e-89c7-5ea2dffcb049)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3030] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3033] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3035] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3037] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3040] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.3041] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3045] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3049] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5c088239-6ddc-4503-b787-c018fb7084c7)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3050] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3055] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3057] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3059] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3062] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <warn>  [1769846544.3063] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3067] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3071] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b4f0b405-f89a-494e-ab03-ec55ecb4f1de)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3072] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3076] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3078] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3080] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3081] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3093] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3095] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3098] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3101] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3107] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3111] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3115] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3119] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3121] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3134] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 kernel: ovs-system: entered promiscuous mode
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3138] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3142] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 systemd-udevd[51838]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3144] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3151] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3156] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 03:02:24 np0005603663 kernel: Timeout policy base is empty
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3160] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3162] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3166] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3169] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3172] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3174] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3178] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): canceled DHCP transaction
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): state changed no lease
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3183] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3190] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3192] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51833 uid=0 result="fail" reason="Device is not activated"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3267] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3271] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3277] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3328] device (eth1): disconnecting for new activation request.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3329] audit: op="connection-activate" uuid="b1c2768f-0cc5-558f-b3d7-45fa9d4a2631" name="ci-private-network" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3330] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3339] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 03:02:24 np0005603663 kernel: br-ex: entered promiscuous mode
Jan 31 03:02:24 np0005603663 kernel: vlan22: entered promiscuous mode
Jan 31 03:02:24 np0005603663 systemd-udevd[51839]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3538] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3553] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3554] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3559] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3560] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3561] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3563] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3564] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3569] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 kernel: vlan21: entered promiscuous mode
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3591] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3595] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3600] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3603] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3607] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3611] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3614] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3617] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3620] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3623] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3634] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3637] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3639] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3641] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 kernel: vlan20: entered promiscuous mode
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3645] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3648] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3656] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3668] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3680] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3717] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3725] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3727] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3737] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3743] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3744] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3759] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 kernel: vlan23: entered promiscuous mode
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3782] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3826] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3828] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3833] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3838] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3840] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3842] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3846] device (eth1): Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3850] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3854] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3859] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3860] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3877] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3880] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3885] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3888] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3895] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3907] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3946] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3947] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 03:02:24 np0005603663 NetworkManager[49054]: <info>  [1769846544.3950] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 03:02:25 np0005603663 NetworkManager[49054]: <info>  [1769846545.5143] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 03:02:25 np0005603663 NetworkManager[49054]: <info>  [1769846545.6684] checkpoint[0x55c1ea6c1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 03:02:25 np0005603663 NetworkManager[49054]: <info>  [1769846545.6686] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 03:02:25 np0005603663 NetworkManager[49054]: <info>  [1769846545.9763] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 03:02:25 np0005603663 NetworkManager[49054]: <info>  [1769846545.9779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.1851] audit: op="networking-control" arg="global-dns-configuration" pid=51833 uid=0 result="success"
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.1881] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.1909] audit: op="networking-control" arg="global-dns-configuration" pid=51833 uid=0 result="success"
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.1936] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 03:02:26 np0005603663 python3.9[52192]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=status _async_dir=/root/.ansible_async
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.3590] checkpoint[0x55c1ea6c1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 03:02:26 np0005603663 NetworkManager[49054]: <info>  [1769846546.3596] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 03:02:26 np0005603663 ansible-async_wrapper.py[51831]: Module complete (51831)
Jan 31 03:02:27 np0005603663 ansible-async_wrapper.py[51830]: Done in kid B.
Jan 31 03:02:29 np0005603663 python3.9[52296]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=status _async_dir=/root/.ansible_async
Jan 31 03:02:30 np0005603663 python3.9[52396]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 03:02:30 np0005603663 python3.9[52548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:02:31 np0005603663 python3.9[52671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846550.404236-317-25361877950196/.source.returncode _original_basename=.n0ksc5pc follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:31 np0005603663 python3.9[52823]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:02:32 np0005603663 python3.9[52946]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846551.5032341-333-173049882578869/.source.cfg _original_basename=.bpvcphoh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:33 np0005603663 python3.9[53099]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:02:33 np0005603663 systemd[1]: Reloading Network Manager...
Jan 31 03:02:33 np0005603663 NetworkManager[49054]: <info>  [1769846553.4826] audit: op="reload" arg="0" pid=53103 uid=0 result="success"
Jan 31 03:02:33 np0005603663 NetworkManager[49054]: <info>  [1769846553.4839] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 03:02:33 np0005603663 systemd[1]: Reloaded Network Manager.
Jan 31 03:02:33 np0005603663 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 03:02:33 np0005603663 systemd[1]: session-10.scope: Consumed 44.019s CPU time.
Jan 31 03:02:33 np0005603663 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Jan 31 03:02:33 np0005603663 systemd-logind[793]: Removed session 10.
Jan 31 03:02:34 np0005603663 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 03:02:39 np0005603663 systemd-logind[793]: New session 11 of user zuul.
Jan 31 03:02:39 np0005603663 systemd[1]: Started Session 11 of User zuul.
Jan 31 03:02:40 np0005603663 python3.9[53289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:02:41 np0005603663 python3.9[53443]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:02:42 np0005603663 python3.9[53637]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:02:43 np0005603663 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 03:02:43 np0005603663 systemd[1]: session-11.scope: Consumed 2.134s CPU time.
Jan 31 03:02:43 np0005603663 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Jan 31 03:02:43 np0005603663 systemd-logind[793]: Removed session 11.
Jan 31 03:02:43 np0005603663 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 03:02:49 np0005603663 systemd-logind[793]: New session 12 of user zuul.
Jan 31 03:02:49 np0005603663 systemd[1]: Started Session 12 of User zuul.
Jan 31 03:02:50 np0005603663 python3.9[53820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:02:51 np0005603663 python3.9[53974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:02:52 np0005603663 python3.9[54130]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:02:52 np0005603663 python3.9[54215]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:02:54 np0005603663 python3.9[54368]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:02:55 np0005603663 python3.9[54564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:56 np0005603663 python3.9[54716]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:02:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-compat440514473-merged.mount: Deactivated successfully.
Jan 31 03:02:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4089264200-merged.mount: Deactivated successfully.
Jan 31 03:02:56 np0005603663 podman[54717]: 2026-01-31 08:02:56.445443642 +0000 UTC m=+0.050421303 system refresh
Jan 31 03:02:57 np0005603663 python3.9[54878]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:02:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:02:57 np0005603663 python3.9[55001]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846576.6018455-74-167306454914483/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a0d0de64980f92bd74a85cb943f41651d4ddd4a2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:02:58 np0005603663 python3.9[55153]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:02:58 np0005603663 python3.9[55276]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846578.0274599-89-30705589071548/.source.conf follow=False _original_basename=registries.conf.j2 checksum=fb9ecd0f69b71ff4fcaafa5405e2d3d2be108c65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:02:59 np0005603663 python3.9[55428]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:00 np0005603663 python3.9[55580]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:00 np0005603663 python3.9[55732]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:01 np0005603663 python3.9[55884]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:02 np0005603663 python3.9[56036]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:03:04 np0005603663 python3.9[56189]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:03:05 np0005603663 python3.9[56343]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:03:05 np0005603663 python3.9[56495]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:03:06 np0005603663 python3.9[56647]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:03:07 np0005603663 python3.9[56800]: ansible-service_facts Invoked
Jan 31 03:03:07 np0005603663 network[56817]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:03:07 np0005603663 network[56818]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:03:07 np0005603663 network[56819]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:03:11 np0005603663 python3.9[57271]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:03:13 np0005603663 python3.9[57424]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 03:03:14 np0005603663 python3.9[57576]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:15 np0005603663 python3.9[57701]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846594.2738013-233-73186446001738/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:16 np0005603663 python3.9[57855]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:16 np0005603663 python3.9[57980]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846595.6311169-248-51889708224660/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:17 np0005603663 python3.9[58134]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:18 np0005603663 python3.9[58288]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:03:20 np0005603663 python3.9[58372]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:21 np0005603663 python3.9[58526]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:03:21 np0005603663 python3.9[58610]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:03:21 np0005603663 chronyd[792]: chronyd exiting
Jan 31 03:03:21 np0005603663 systemd[1]: Stopping NTP client/server...
Jan 31 03:03:21 np0005603663 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 03:03:21 np0005603663 systemd[1]: Stopped NTP client/server.
Jan 31 03:03:21 np0005603663 systemd[1]: Starting NTP client/server...
Jan 31 03:03:22 np0005603663 chronyd[58619]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 03:03:22 np0005603663 chronyd[58619]: Frequency -28.532 +/- 0.471 ppm read from /var/lib/chrony/drift
Jan 31 03:03:22 np0005603663 chronyd[58619]: Loaded seccomp filter (level 2)
Jan 31 03:03:22 np0005603663 systemd[1]: Started NTP client/server.
Jan 31 03:03:22 np0005603663 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 03:03:22 np0005603663 systemd[1]: session-12.scope: Consumed 23.188s CPU time.
Jan 31 03:03:22 np0005603663 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Jan 31 03:03:22 np0005603663 systemd-logind[793]: Removed session 12.
Jan 31 03:03:29 np0005603663 systemd-logind[793]: New session 13 of user zuul.
Jan 31 03:03:29 np0005603663 systemd[1]: Started Session 13 of User zuul.
Jan 31 03:03:30 np0005603663 python3.9[58800]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:30 np0005603663 python3.9[58952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:31 np0005603663 python3.9[59075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846610.3053641-29-162700165575738/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:31 np0005603663 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 03:03:31 np0005603663 systemd[1]: session-13.scope: Consumed 1.525s CPU time.
Jan 31 03:03:31 np0005603663 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Jan 31 03:03:31 np0005603663 systemd-logind[793]: Removed session 13.
Jan 31 03:03:37 np0005603663 systemd-logind[793]: New session 14 of user zuul.
Jan 31 03:03:37 np0005603663 systemd[1]: Started Session 14 of User zuul.
Jan 31 03:03:38 np0005603663 python3.9[59253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:03:39 np0005603663 python3.9[59409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:40 np0005603663 python3.9[59584]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:40 np0005603663 python3.9[59707]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769846619.431571-36-212876184170098/.source.json _original_basename=.wgu1uijb follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:41 np0005603663 python3.9[59859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:42 np0005603663 python3.9[59982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846621.3824642-59-43092221843208/.source _original_basename=.wpy1veem follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:43 np0005603663 python3.9[60134]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:43 np0005603663 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:44 np0005603663 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846623.4673817-83-75942054987205/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:45 np0005603663 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:45 np0005603663 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846624.6211345-83-207611462919307/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:03:46 np0005603663 python3.9[60836]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:46 np0005603663 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:47 np0005603663 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846626.3524013-120-182153191120037/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:47 np0005603663 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:48 np0005603663 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846627.5183442-135-113327242731834/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:49 np0005603663 python3.9[61538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:49 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:49 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:49 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:49 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:49 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:49 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:49 np0005603663 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 03:03:49 np0005603663 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 03:03:50 np0005603663 python3.9[61764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:50 np0005603663 python3.9[61887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846630.008669-158-229850904310803/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:51 np0005603663 python3.9[62039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:03:52 np0005603663 python3.9[62162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846631.1145935-173-139390245400861/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:03:52 np0005603663 python3.9[62314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:52 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:52 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:52 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:52 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:52 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:52 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:53 np0005603663 systemd[1]: Starting Create netns directory...
Jan 31 03:03:53 np0005603663 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 03:03:53 np0005603663 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 03:03:53 np0005603663 systemd[1]: Finished Create netns directory.
Jan 31 03:03:53 np0005603663 python3.9[62541]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:03:53 np0005603663 network[62558]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:03:53 np0005603663 network[62559]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:03:53 np0005603663 network[62560]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:03:56 np0005603663 python3.9[62822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:56 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:56 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:56 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:57 np0005603663 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 03:03:57 np0005603663 iptables.init[62861]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 03:03:57 np0005603663 iptables.init[62861]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 03:03:57 np0005603663 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 03:03:57 np0005603663 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 03:03:58 np0005603663 python3.9[63057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:59 np0005603663 python3.9[63211]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:03:59 np0005603663 systemd[1]: Reloading.
Jan 31 03:03:59 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:03:59 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:03:59 np0005603663 systemd[1]: Starting Netfilter Tables...
Jan 31 03:03:59 np0005603663 systemd[1]: Finished Netfilter Tables.
Jan 31 03:04:00 np0005603663 python3.9[63403]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:01 np0005603663 python3.9[63556]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:01 np0005603663 python3.9[63681]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846640.8543825-242-237684181591787/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:02 np0005603663 python3.9[63834]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:04:02 np0005603663 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 03:04:02 np0005603663 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 03:04:03 np0005603663 python3.9[63990]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:03 np0005603663 python3.9[64142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:04 np0005603663 python3.9[64265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846643.3404284-273-165664477151750/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:05 np0005603663 python3.9[64417]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 03:04:05 np0005603663 systemd[1]: Starting Time & Date Service...
Jan 31 03:04:05 np0005603663 systemd[1]: Started Time & Date Service.
Jan 31 03:04:05 np0005603663 python3.9[64573]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:06 np0005603663 python3.9[64725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:07 np0005603663 python3.9[64848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846646.0832753-308-32194648216786/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:07 np0005603663 python3.9[65000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:08 np0005603663 python3.9[65123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846647.2067745-323-93897081966963/.source.yaml _original_basename=.rfkq3vex follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:08 np0005603663 python3.9[65275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:09 np0005603663 python3.9[65398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846648.2833638-338-973658554962/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:09 np0005603663 python3.9[65550]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:10 np0005603663 python3.9[65703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:11 np0005603663 python3[65856]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 03:04:11 np0005603663 python3.9[66008]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:12 np0005603663 python3.9[66131]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846651.5271509-377-39972684576228/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:12 np0005603663 python3.9[66283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:13 np0005603663 python3.9[66406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846652.5353765-392-19829098134456/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:13 np0005603663 python3.9[66558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:14 np0005603663 python3.9[66681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846653.5433662-407-225303096826217/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:14 np0005603663 python3.9[66833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:15 np0005603663 python3.9[66956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846654.5986328-422-78700857142203/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:16 np0005603663 python3.9[67108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:04:16 np0005603663 python3.9[67231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846655.578394-437-89056132597835/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:17 np0005603663 python3.9[67383]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:17 np0005603663 python3.9[67535]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:18 np0005603663 python3.9[67694]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:19 np0005603663 python3.9[67847]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:19 np0005603663 python3.9[67999]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:20 np0005603663 python3.9[68151]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 03:04:21 np0005603663 python3.9[68304]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 03:04:21 np0005603663 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 03:04:21 np0005603663 systemd[1]: session-14.scope: Consumed 31.446s CPU time.
Jan 31 03:04:21 np0005603663 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Jan 31 03:04:21 np0005603663 systemd-logind[793]: Removed session 14.
Jan 31 03:04:26 np0005603663 systemd-logind[793]: New session 15 of user zuul.
Jan 31 03:04:26 np0005603663 systemd[1]: Started Session 15 of User zuul.
Jan 31 03:04:27 np0005603663 python3.9[68485]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 03:04:28 np0005603663 python3.9[68637]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:04:29 np0005603663 python3.9[68789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:04:29 np0005603663 python3.9[68941]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWE2JVgZg7/u8eKJOhyXjs2p2Qt39hyygdPIhluejh1YW6dcdEylP4WBj6s+q3E0jylhkLknf3rSZ3V/k+1w4fdSUak8G4nLiV+h7jI0m37zoSEXpQABHGJkpgi2eMs0YNEF9ZbgIO31d28SspBpNxFqovrMK9sOzJD3jRaR2TV2FGV4csI4Je0LNdEV2NmeRljWtF7PlqQKs424iGvqmWC0B3yHCfBTNvXWNKzGR1N9odg9DQrU9iQl+1eRKkj6BTvJgzpUrsqny5n8vohkDGBUxN/PXOEp7pqhuJUPSphsqmLwQwrLfwDu7A7dJJfZkVKkpzZyD6doTBm0NvOOS1P7M8/iclLU1KEYLp51WWXc+cX67skjn1vfDJa7CGV5YlXA3q5QP5xqR6eDbptMG7KpRBt6sSG7A44KIXdmzbWGFuBJYi0sjVIDfXPkfJOcwxwUzMotpbCYCDOV94CS6XESh8ZKogwpuB8qVCTqZEJz/qxAkpdL1xxLZ6iM3SA2k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdV4ImCUSap74vh7n2NTRmfyoKbp4X6QTOOZaAU/4X4#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNKN9rH1fl1KXYyt+swOzNYmow6bIvU77b90jfMS4wXtyUATZdas4vlUZ46SayVV+s+nKQQloJFhgnR/5ots9Yc=#012 create=True mode=0644 path=/tmp/ansible.gos1m2ug state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:30 np0005603663 python3.9[69093]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gos1m2ug' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:31 np0005603663 python3.9[69247]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gos1m2ug state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:31 np0005603663 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 03:04:31 np0005603663 systemd[1]: session-15.scope: Consumed 3.101s CPU time.
Jan 31 03:04:31 np0005603663 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Jan 31 03:04:31 np0005603663 systemd-logind[793]: Removed session 15.
Jan 31 03:04:35 np0005603663 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 03:04:37 np0005603663 systemd-logind[793]: New session 16 of user zuul.
Jan 31 03:04:37 np0005603663 systemd[1]: Started Session 16 of User zuul.
Jan 31 03:04:38 np0005603663 python3.9[69427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:04:39 np0005603663 python3.9[69583]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 03:04:40 np0005603663 python3.9[69737]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:04:41 np0005603663 python3.9[69890]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:42 np0005603663 python3.9[70043]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:04:43 np0005603663 python3.9[70197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:04:44 np0005603663 python3.9[70352]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:04:44 np0005603663 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 03:04:44 np0005603663 systemd[1]: session-16.scope: Consumed 4.289s CPU time.
Jan 31 03:04:44 np0005603663 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Jan 31 03:04:44 np0005603663 systemd-logind[793]: Removed session 16.
Jan 31 03:04:56 np0005603663 systemd-logind[793]: New session 17 of user zuul.
Jan 31 03:04:56 np0005603663 systemd[1]: Started Session 17 of User zuul.
Jan 31 03:04:57 np0005603663 python3.9[70530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:04:58 np0005603663 python3.9[70686]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:04:59 np0005603663 python3.9[70770]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 03:05:00 np0005603663 python3.9[70921]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:02 np0005603663 python3.9[71072]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:05:02 np0005603663 python3.9[71222]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:05:02 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:05:02 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:05:03 np0005603663 python3.9[71373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:05:03 np0005603663 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 03:05:03 np0005603663 systemd[1]: session-17.scope: Consumed 5.574s CPU time.
Jan 31 03:05:03 np0005603663 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Jan 31 03:05:03 np0005603663 systemd-logind[793]: Removed session 17.
Jan 31 03:05:11 np0005603663 systemd-logind[793]: New session 18 of user zuul.
Jan 31 03:05:11 np0005603663 systemd[1]: Started Session 18 of User zuul.
Jan 31 03:05:16 np0005603663 python3[72139]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:05:18 np0005603663 python3[72234]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 03:05:19 np0005603663 python3[72261]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:19 np0005603663 python3[72287]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:19 np0005603663 kernel: loop: module loaded
Jan 31 03:05:19 np0005603663 kernel: loop3: detected capacity change from 0 to 41943040
Jan 31 03:05:20 np0005603663 python3[72322]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:20 np0005603663 lvm[72325]: PV /dev/loop3 not used.
Jan 31 03:05:20 np0005603663 lvm[72327]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:05:20 np0005603663 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 03:05:20 np0005603663 lvm[72331]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 03:05:20 np0005603663 lvm[72337]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:05:20 np0005603663 lvm[72337]: VG ceph_vg0 finished
Jan 31 03:05:20 np0005603663 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 03:05:20 np0005603663 python3[72415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:05:21 np0005603663 python3[72488]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846720.5403633-36393-214961255597074/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:21 np0005603663 python3[72538]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:05:21 np0005603663 systemd[1]: Reloading.
Jan 31 03:05:21 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:05:21 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:05:22 np0005603663 systemd[1]: Starting Ceph OSD losetup...
Jan 31 03:05:22 np0005603663 bash[72580]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Jan 31 03:05:22 np0005603663 systemd[1]: Finished Ceph OSD losetup.
Jan 31 03:05:22 np0005603663 lvm[72581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:05:22 np0005603663 lvm[72581]: VG ceph_vg0 finished
Jan 31 03:05:22 np0005603663 python3[72607]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 03:05:23 np0005603663 python3[72634]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:24 np0005603663 python3[72660]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:24 np0005603663 kernel: loop4: detected capacity change from 0 to 41943040
Jan 31 03:05:24 np0005603663 python3[72692]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:24 np0005603663 lvm[72695]: PV /dev/loop4 not used.
Jan 31 03:05:24 np0005603663 lvm[72697]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:05:24 np0005603663 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 31 03:05:24 np0005603663 lvm[72700]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 31 03:05:24 np0005603663 lvm[72707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:05:24 np0005603663 lvm[72707]: VG ceph_vg1 finished
Jan 31 03:05:24 np0005603663 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 31 03:05:25 np0005603663 python3[72785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:05:25 np0005603663 python3[72858]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846724.97736-36420-213522309033381/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:26 np0005603663 python3[72908]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:05:26 np0005603663 systemd[1]: Reloading.
Jan 31 03:05:26 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:05:26 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:05:26 np0005603663 systemd[1]: Starting Ceph OSD losetup...
Jan 31 03:05:26 np0005603663 bash[72948]: /dev/loop4: [64513]:4355723 (/var/lib/ceph-osd-1.img)
Jan 31 03:05:26 np0005603663 systemd[1]: Finished Ceph OSD losetup.
Jan 31 03:05:26 np0005603663 lvm[72949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:05:26 np0005603663 lvm[72949]: VG ceph_vg1 finished
Jan 31 03:05:26 np0005603663 python3[72975]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 03:05:28 np0005603663 python3[73002]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:28 np0005603663 python3[73028]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:28 np0005603663 kernel: loop5: detected capacity change from 0 to 41943040
Jan 31 03:05:29 np0005603663 python3[73060]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:29 np0005603663 lvm[73063]: PV /dev/loop5 not used.
Jan 31 03:05:29 np0005603663 lvm[73065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:05:29 np0005603663 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 31 03:05:29 np0005603663 lvm[73076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:05:29 np0005603663 lvm[73076]: VG ceph_vg2 finished
Jan 31 03:05:29 np0005603663 lvm[73073]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 31 03:05:29 np0005603663 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 31 03:05:29 np0005603663 python3[73154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:05:30 np0005603663 python3[73227]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846729.4119883-36447-227531220003309/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:30 np0005603663 python3[73277]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:05:30 np0005603663 systemd[1]: Reloading.
Jan 31 03:05:30 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:05:30 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:05:30 np0005603663 systemd[1]: Starting Ceph OSD losetup...
Jan 31 03:05:30 np0005603663 bash[73316]: /dev/loop5: [64513]:4355725 (/var/lib/ceph-osd-2.img)
Jan 31 03:05:30 np0005603663 systemd[1]: Finished Ceph OSD losetup.
Jan 31 03:05:30 np0005603663 lvm[73317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:05:30 np0005603663 lvm[73317]: VG ceph_vg2 finished
Jan 31 03:05:32 np0005603663 chronyd[58619]: Selected source 23.133.168.247 (pool.ntp.org)
Jan 31 03:05:32 np0005603663 python3[73341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:05:34 np0005603663 python3[73434]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 03:05:38 np0005603663 python3[73491]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 03:05:43 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:05:43 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:05:44 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:05:44 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:05:44 np0005603663 systemd[1]: run-r64c7e20c282348a1a1a85ae7d09131ec.service: Deactivated successfully.
Jan 31 03:05:44 np0005603663 python3[73610]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:45 np0005603663 python3[73638]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:05:45 np0005603663 python3[73678]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:46 np0005603663 python3[73704]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:46 np0005603663 python3[73782]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:05:47 np0005603663 python3[73855]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846746.6134808-36595-24726983862530/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:47 np0005603663 python3[73957]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:05:48 np0005603663 python3[74030]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846747.6756558-36613-281222421558314/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:05:48 np0005603663 python3[74080]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:49 np0005603663 python3[74108]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:49 np0005603663 python3[74136]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:49 np0005603663 python3[74162]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:05:50 np0005603663 python3[74188]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:05:50 np0005603663 systemd-logind[793]: New session 19 of user ceph-admin.
Jan 31 03:05:50 np0005603663 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 03:05:50 np0005603663 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 03:05:50 np0005603663 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 03:05:50 np0005603663 systemd[1]: Starting User Manager for UID 42477...
Jan 31 03:05:50 np0005603663 systemd[74196]: Queued start job for default target Main User Target.
Jan 31 03:05:50 np0005603663 systemd[74196]: Created slice User Application Slice.
Jan 31 03:05:50 np0005603663 systemd[74196]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:05:50 np0005603663 systemd[74196]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:05:50 np0005603663 systemd[74196]: Reached target Paths.
Jan 31 03:05:50 np0005603663 systemd[74196]: Reached target Timers.
Jan 31 03:05:50 np0005603663 systemd[74196]: Starting D-Bus User Message Bus Socket...
Jan 31 03:05:50 np0005603663 systemd[74196]: Starting Create User's Volatile Files and Directories...
Jan 31 03:05:50 np0005603663 systemd[74196]: Finished Create User's Volatile Files and Directories.
Jan 31 03:05:50 np0005603663 systemd[74196]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:05:50 np0005603663 systemd[74196]: Reached target Sockets.
Jan 31 03:05:50 np0005603663 systemd[74196]: Reached target Basic System.
Jan 31 03:05:50 np0005603663 systemd[74196]: Reached target Main User Target.
Jan 31 03:05:50 np0005603663 systemd[74196]: Startup finished in 140ms.
Jan 31 03:05:50 np0005603663 systemd[1]: Started User Manager for UID 42477.
Jan 31 03:05:50 np0005603663 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 03:05:50 np0005603663 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 03:05:50 np0005603663 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Jan 31 03:05:50 np0005603663 systemd-logind[793]: Removed session 19.
Jan 31 03:05:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:05:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:05:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-compat913069277-merged.mount: Deactivated successfully.
Jan 31 03:05:53 np0005603663 systemd[1]: var-lib-containers-storage-overlay-compat913069277-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 03:06:00 np0005603663 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 03:06:00 np0005603663 systemd[74196]: Activating special unit Exit the Session...
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped target Main User Target.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped target Basic System.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped target Paths.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped target Sockets.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped target Timers.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:06:00 np0005603663 systemd[74196]: Closed D-Bus User Message Bus Socket.
Jan 31 03:06:00 np0005603663 systemd[74196]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:06:00 np0005603663 systemd[74196]: Removed slice User Application Slice.
Jan 31 03:06:00 np0005603663 systemd[74196]: Reached target Shutdown.
Jan 31 03:06:00 np0005603663 systemd[74196]: Finished Exit the Session.
Jan 31 03:06:00 np0005603663 systemd[74196]: Reached target Exit the Session.
Jan 31 03:06:00 np0005603663 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 03:06:00 np0005603663 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 03:06:00 np0005603663 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 03:06:00 np0005603663 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 03:06:00 np0005603663 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 03:06:00 np0005603663 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 03:06:00 np0005603663 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 03:06:08 np0005603663 podman[74290]: 2026-01-31 08:06:08.099106432 +0000 UTC m=+17.113487475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.157332784 +0000 UTC m=+0.038613963 container create 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:06:08 np0005603663 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 03:06:08 np0005603663 systemd[1]: Started libpod-conmon-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope.
Jan 31 03:06:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.235506274 +0000 UTC m=+0.116787503 container init 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.139105363 +0000 UTC m=+0.020386522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.244666305 +0000 UTC m=+0.125947484 container start 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.248629868 +0000 UTC m=+0.129911117 container attach 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:06:08 np0005603663 elastic_heisenberg[74367]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.343575757 +0000 UTC m=+0.224856896 container died 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:06:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6caee2d88ccc528950b32865eacd9cd7652eed77bba0a31065347f0a36f5e901-merged.mount: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74352]: 2026-01-31 08:06:08.383089405 +0000 UTC m=+0.264370544 container remove 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-conmon-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.437137207 +0000 UTC m=+0.031921402 container create adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:08 np0005603663 systemd[1]: Started libpod-conmon-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope.
Jan 31 03:06:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.493243758 +0000 UTC m=+0.088027973 container init adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.498464736 +0000 UTC m=+0.093248971 container start adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:06:08 np0005603663 funny_kowalevski[74401]: 167 167
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 conmon[74401]: conmon adff60186a5cad5b26eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope/container/memory.events
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.502630075 +0000 UTC m=+0.097414290 container attach adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.503171791 +0000 UTC m=+0.097955996 container died adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.422424557 +0000 UTC m=+0.017208772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:08 np0005603663 podman[74384]: 2026-01-31 08:06:08.54870786 +0000 UTC m=+0.143492095 container remove adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-conmon-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.621097465 +0000 UTC m=+0.051180411 container create 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:08 np0005603663 systemd[1]: Started libpod-conmon-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope.
Jan 31 03:06:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.686767369 +0000 UTC m=+0.116850405 container init 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.693857791 +0000 UTC m=+0.123940737 container start 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.601110515 +0000 UTC m=+0.031193471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.697599438 +0000 UTC m=+0.127682384 container attach 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 03:06:08 np0005603663 nice_wu[74434]: AQDwt31pqzpvKhAAwCq/IaEW0vIW+tDr7dDTpQ==
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.714223022 +0000 UTC m=+0.144305988 container died 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:08 np0005603663 podman[74418]: 2026-01-31 08:06:08.762488019 +0000 UTC m=+0.192570995 container remove 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-conmon-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.81823689 +0000 UTC m=+0.039835058 container create 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:06:08 np0005603663 systemd[1]: Started libpod-conmon-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope.
Jan 31 03:06:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.870551663 +0000 UTC m=+0.092149851 container init 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.8739728 +0000 UTC m=+0.095570968 container start 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:06:08 np0005603663 boring_mendeleev[74470]: AQDwt31pAm35NBAAi6sd/vujGwQEztDRC4JBcg==
Jan 31 03:06:08 np0005603663 systemd[1]: libpod-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope: Deactivated successfully.
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.802239514 +0000 UTC m=+0.023837702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.937848633 +0000 UTC m=+0.159446801 container attach 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:08 np0005603663 podman[74453]: 2026-01-31 08:06:08.938304156 +0000 UTC m=+0.159902334 container died 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:06:09 np0005603663 podman[74453]: 2026-01-31 08:06:09.056834847 +0000 UTC m=+0.278433055 container remove 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-conmon-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.115920333 +0000 UTC m=+0.043409709 container create 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:09 np0005603663 systemd[1]: Started libpod-conmon-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope.
Jan 31 03:06:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.165318473 +0000 UTC m=+0.092807849 container init 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.173228978 +0000 UTC m=+0.100718364 container start 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.178270242 +0000 UTC m=+0.105759618 container attach 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:06:09 np0005603663 priceless_easley[74506]: AQDxt31pO1lCCxAA+A+ZdrL5Mu1L29mq8hUmxQ==
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.093803782 +0000 UTC m=+0.021293208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.19116593 +0000 UTC m=+0.118655316 container died 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:06:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-22183a456f3e5aa264afdc08381e8efb5db60695eb8338f38fa200c9d3d21eaf-merged.mount: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74489]: 2026-01-31 08:06:09.224867342 +0000 UTC m=+0.152356688 container remove 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:06:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-conmon-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.282413703 +0000 UTC m=+0.038698575 container create 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:06:09 np0005603663 systemd[1]: Started libpod-conmon-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope.
Jan 31 03:06:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7052a080da12557d25557e1b4f4e4847d22dd554ddec371bad6f88e4fd740a7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.265143211 +0000 UTC m=+0.021428133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.379304148 +0000 UTC m=+0.135589050 container init 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.385425602 +0000 UTC m=+0.141710484 container start 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.400755899 +0000 UTC m=+0.157040781 container attach 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:06:09 np0005603663 goofy_knuth[74539]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 03:06:09 np0005603663 goofy_knuth[74539]: setting min_mon_release = tentacle
Jan 31 03:06:09 np0005603663 goofy_knuth[74539]: /usr/bin/monmaptool: set fsid to 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:09 np0005603663 goofy_knuth[74539]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.432633368 +0000 UTC m=+0.188918250 container died 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:06:09 np0005603663 podman[74523]: 2026-01-31 08:06:09.464524108 +0000 UTC m=+0.220809000 container remove 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-conmon-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.530094339 +0000 UTC m=+0.050740369 container create fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:06:09 np0005603663 systemd[1]: Started libpod-conmon-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope.
Jan 31 03:06:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.592038587 +0000 UTC m=+0.112684667 container init fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.502644196 +0000 UTC m=+0.023290256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.599013876 +0000 UTC m=+0.119659896 container start fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.603397991 +0000 UTC m=+0.124044051 container attach fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.704355561 +0000 UTC m=+0.225001611 container died fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:06:09 np0005603663 podman[74560]: 2026-01-31 08:06:09.753535254 +0000 UTC m=+0.274181304 container remove fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:06:09 np0005603663 systemd[1]: libpod-conmon-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope: Deactivated successfully.
Jan 31 03:06:09 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:09 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:09 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:10 np0005603663 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 03:06:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:10 np0005603663 systemd[1]: Reached target Ceph cluster 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:11 np0005603663 systemd[1]: Created slice Slice /system/ceph-82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:11 np0005603663 systemd[1]: Reached target System Time Set.
Jan 31 03:06:11 np0005603663 systemd[1]: Reached target System Time Synchronized.
Jan 31 03:06:11 np0005603663 systemd[1]: Starting Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:11 np0005603663 podman[74854]: 2026-01-31 08:06:11.191210993 +0000 UTC m=+0.043648006 container create f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 podman[74854]: 2026-01-31 08:06:11.24472951 +0000 UTC m=+0.097166513 container init f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:11 np0005603663 podman[74854]: 2026-01-31 08:06:11.252159342 +0000 UTC m=+0.104596325 container start f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:11 np0005603663 bash[74854]: f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410
Jan 31 03:06:11 np0005603663 podman[74854]: 2026-01-31 08:06:11.170201834 +0000 UTC m=+0.022638847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:11 np0005603663 systemd[1]: Started Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: pidfile_write: ignore empty --pid-file
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: load: jerasure load: lrc 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Git sha 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: DB SUMMARY
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: DB Session ID:  6H349LA39CZV4Z01SNE0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                                     Options.env: 0x55fed842c440
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                                Options.info_log: 0x55fed9b1b3e0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                                 Options.wal_dir: 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                    Options.write_buffer_manager: 0x55fed9a9a140
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                               Options.row_cache: None
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                              Options.wal_filter: None
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.wal_compression: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.max_background_jobs: 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Compression algorithms supported:
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kZSTD supported: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kXpressCompression supported: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kZlibCompression supported: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:           Options.merge_operator: 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:        Options.compaction_filter: None
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fed9aa6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fed9a8b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.compression: NoCompression
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.num_levels: 7
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91992687-9ca4-489a-811f-a25b3432622d
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771295993, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771298133, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "6H349LA39CZV4Z01SNE0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771298230, "job": 1, "event": "recovery_finished"}
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fed9ab8e00
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: DB pointer 0x55fed9c04000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fed9a8b8d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@-1(???) e0 preinit fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : created 2026-01-31T08:06:09.429767+0000
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T08:06:09.636428Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.332603737 +0000 UTC m=+0.045295503 container create 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).mds e1 new map
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-31T08:06:11:330734+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mkfs 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:11 np0005603663 systemd[1]: Started libpod-conmon-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope.
Jan 31 03:06:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.316146888 +0000 UTC m=+0.028838654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.424268142 +0000 UTC m=+0.136959928 container init 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.431598252 +0000 UTC m=+0.144290038 container start 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.43820213 +0000 UTC m=+0.150893916 container attach 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 03:06:11 np0005603663 ceph-mon[74874]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889949626' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:  cluster:
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    id:     82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    health: HEALTH_OK
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]: 
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:  services:
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    mon: 1 daemons, quorum compute-0 (age 0.30607s) [leader: compute-0]
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    mgr: no daemons active
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    osd: 0 osds: 0 up, 0 in
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]: 
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:  data:
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    pools:   0 pools, 0 pgs
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    objects: 0 objects, 0 B
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    usage:   0 B used, 0 B / 0 B avail
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]:    pgs:     
Jan 31 03:06:11 np0005603663 festive_lumiere[74929]: 
Jan 31 03:06:11 np0005603663 systemd[1]: libpod-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope: Deactivated successfully.
Jan 31 03:06:11 np0005603663 conmon[74929]: conmon 54e821b7956b175c66df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope/container/memory.events
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.654184042 +0000 UTC m=+0.366875808 container died 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892-merged.mount: Deactivated successfully.
Jan 31 03:06:11 np0005603663 podman[74875]: 2026-01-31 08:06:11.689709926 +0000 UTC m=+0.402401692 container remove 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:11 np0005603663 systemd[1]: libpod-conmon-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope: Deactivated successfully.
Jan 31 03:06:11 np0005603663 podman[74966]: 2026-01-31 08:06:11.748823212 +0000 UTC m=+0.040220988 container create 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:06:11 np0005603663 systemd[1]: Started libpod-conmon-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope.
Jan 31 03:06:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:11 np0005603663 podman[74966]: 2026-01-31 08:06:11.818198572 +0000 UTC m=+0.109596378 container init 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:11 np0005603663 podman[74966]: 2026-01-31 08:06:11.822900466 +0000 UTC m=+0.114298262 container start 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:11 np0005603663 podman[74966]: 2026-01-31 08:06:11.729793599 +0000 UTC m=+0.021191445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:11 np0005603663 podman[74966]: 2026-01-31 08:06:11.826349254 +0000 UTC m=+0.117747110 container attach 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 03:06:12 np0005603663 angry_blackwell[74982]: 
Jan 31 03:06:12 np0005603663 angry_blackwell[74982]: [global]
Jan 31 03:06:12 np0005603663 angry_blackwell[74982]: #011fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:12 np0005603663 angry_blackwell[74982]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 03:06:12 np0005603663 angry_blackwell[74982]: #011osd_crush_chooseleaf_type = 0
Jan 31 03:06:12 np0005603663 systemd[1]: libpod-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[74966]: 2026-01-31 08:06:12.06523838 +0000 UTC m=+0.356636186 container died 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3-merged.mount: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[74966]: 2026-01-31 08:06:12.098899871 +0000 UTC m=+0.390297667 container remove 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:06:12 np0005603663 systemd[1]: libpod-conmon-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.15110096 +0000 UTC m=+0.037086569 container create 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:06:12 np0005603663 systemd[1]: Started libpod-conmon-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope.
Jan 31 03:06:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.133334173 +0000 UTC m=+0.019319872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.232006108 +0000 UTC m=+0.117991737 container init 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.236080984 +0000 UTC m=+0.122066643 container start 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.240208592 +0000 UTC m=+0.126194241 container attach 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999526819' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:12 np0005603663 systemd[1]: libpod-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.441464384 +0000 UTC m=+0.327450043 container died 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:06:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5-merged.mount: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[75019]: 2026-01-31 08:06:12.477295937 +0000 UTC m=+0.363281556 container remove 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:06:12 np0005603663 systemd[1]: libpod-conmon-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope: Deactivated successfully.
Jan 31 03:06:12 np0005603663 systemd[1]: Stopping Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: mon.compute-0@0(leader) e1 shutdown
Jan 31 03:06:12 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[74870]: 2026-01-31T08:06:12.644+0000 7f7b5c911640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 03:06:12 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[74870]: 2026-01-31T08:06:12.644+0000 7f7b5c911640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 03:06:12 np0005603663 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 03:06:12 np0005603663 podman[75103]: 2026-01-31 08:06:12.797661487 +0000 UTC m=+0.183322331 container died f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf-merged.mount: Deactivated successfully.
Jan 31 03:06:12 np0005603663 podman[75103]: 2026-01-31 08:06:12.829855156 +0000 UTC m=+0.215515980 container remove f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:12 np0005603663 bash[75103]: ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0
Jan 31 03:06:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 03:06:12 np0005603663 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mon.compute-0.service: Deactivated successfully.
Jan 31 03:06:12 np0005603663 systemd[1]: Stopped Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:12 np0005603663 systemd[1]: Starting Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:13 np0005603663 podman[75207]: 2026-01-31 08:06:13.145509211 +0000 UTC m=+0.047157137 container create 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 podman[75207]: 2026-01-31 08:06:13.121577778 +0000 UTC m=+0.023225744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:13 np0005603663 podman[75207]: 2026-01-31 08:06:13.224123354 +0000 UTC m=+0.125771290 container init 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:06:13 np0005603663 podman[75207]: 2026-01-31 08:06:13.232143692 +0000 UTC m=+0.133791588 container start 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:06:13 np0005603663 bash[75207]: 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a
Jan 31 03:06:13 np0005603663 systemd[1]: Started Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: pidfile_write: ignore empty --pid-file
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: load: jerasure load: lrc 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Git sha 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: DB SUMMARY
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: DB Session ID:  RDN3DWKE2K2I6QTJYIJY
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                                     Options.env: 0x55bf4a6e3440
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                                Options.info_log: 0x55bf4c749e80
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                                 Options.wal_dir: 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                    Options.write_buffer_manager: 0x55bf4c794140
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                               Options.row_cache: None
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                              Options.wal_filter: None
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.wal_compression: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.max_background_jobs: 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Compression algorithms supported:
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kZSTD supported: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kXpressCompression supported: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kZlibCompression supported: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:           Options.merge_operator: 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:        Options.compaction_filter: None
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bf4c7a0a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bf4c7858d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.compression: NoCompression
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.num_levels: 7
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91992687-9ca4-489a-811f-a25b3432622d
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773282060, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773286610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846773, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773286782, "job": 1, "event": "recovery_finished"}
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bf4c7b2e00
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: DB pointer 0x55bf4c8fc000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???) e1 preinit fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).mds e1 new map
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-31T08:06:11:330734+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.309567911 +0000 UTC m=+0.051177721 container create 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : created 2026-01-31T08:06:09.429767+0000
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 03:06:13 np0005603663 systemd[1]: Started libpod-conmon-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope.
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 03:06:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.282031976 +0000 UTC m=+0.023641786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.388749821 +0000 UTC m=+0.130359661 container init 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.396834791 +0000 UTC m=+0.138444601 container start 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.402449011 +0000 UTC m=+0.144058821 container attach 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 31 03:06:13 np0005603663 systemd[1]: libpod-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope: Deactivated successfully.
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.598585107 +0000 UTC m=+0.340194877 container died 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:06:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4-merged.mount: Deactivated successfully.
Jan 31 03:06:13 np0005603663 podman[75228]: 2026-01-31 08:06:13.636339985 +0000 UTC m=+0.377949755 container remove 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:06:13 np0005603663 systemd[1]: libpod-conmon-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope: Deactivated successfully.
Jan 31 03:06:13 np0005603663 podman[75320]: 2026-01-31 08:06:13.685460816 +0000 UTC m=+0.035594646 container create 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:13 np0005603663 systemd[1]: Started libpod-conmon-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope.
Jan 31 03:06:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:13 np0005603663 podman[75320]: 2026-01-31 08:06:13.760084865 +0000 UTC m=+0.110218665 container init 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:13 np0005603663 podman[75320]: 2026-01-31 08:06:13.764655926 +0000 UTC m=+0.114789716 container start 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:13 np0005603663 podman[75320]: 2026-01-31 08:06:13.669936793 +0000 UTC m=+0.020070603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:13 np0005603663 podman[75320]: 2026-01-31 08:06:13.76832655 +0000 UTC m=+0.118460390 container attach 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:06:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 31 03:06:14 np0005603663 systemd[1]: libpod-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope: Deactivated successfully.
Jan 31 03:06:14 np0005603663 podman[75320]: 2026-01-31 08:06:14.006411023 +0000 UTC m=+0.356544813 container died 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:06:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b-merged.mount: Deactivated successfully.
Jan 31 03:06:14 np0005603663 podman[75320]: 2026-01-31 08:06:14.046056154 +0000 UTC m=+0.396189954 container remove 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:06:14 np0005603663 systemd[1]: libpod-conmon-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope: Deactivated successfully.
Jan 31 03:06:14 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:14 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:14 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:14 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:14 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:14 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:14 np0005603663 systemd[1]: Starting Ceph mgr.compute-0.fqetdi for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:14 np0005603663 podman[75500]: 2026-01-31 08:06:14.771187183 +0000 UTC m=+0.032405885 container create 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/lib/ceph/mgr/ceph-compute-0.fqetdi supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 podman[75500]: 2026-01-31 08:06:14.816681851 +0000 UTC m=+0.077900553 container init 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:06:14 np0005603663 podman[75500]: 2026-01-31 08:06:14.824646779 +0000 UTC m=+0.085865481 container start 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 31 03:06:14 np0005603663 bash[75500]: 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab
Jan 31 03:06:14 np0005603663 podman[75500]: 2026-01-31 08:06:14.75601053 +0000 UTC m=+0.017229262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:14 np0005603663 systemd[1]: Started Ceph mgr.compute-0.fqetdi for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:14 np0005603663 ceph-mgr[75519]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:06:14 np0005603663 ceph-mgr[75519]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 03:06:14 np0005603663 ceph-mgr[75519]: pidfile_write: ignore empty --pid-file
Jan 31 03:06:14 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'alerts'
Jan 31 03:06:14 np0005603663 podman[75520]: 2026-01-31 08:06:14.902557211 +0000 UTC m=+0.042526974 container create 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:06:14 np0005603663 systemd[1]: Started libpod-conmon-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope.
Jan 31 03:06:14 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:14 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'balancer'
Jan 31 03:06:14 np0005603663 podman[75520]: 2026-01-31 08:06:14.887909663 +0000 UTC m=+0.027879446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:15 np0005603663 podman[75520]: 2026-01-31 08:06:14.992627881 +0000 UTC m=+0.132597724 container init 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:06:15 np0005603663 podman[75520]: 2026-01-31 08:06:15.027365172 +0000 UTC m=+0.167334955 container start 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:15 np0005603663 podman[75520]: 2026-01-31 08:06:15.030558163 +0000 UTC m=+0.170528006 container attach 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:15 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'cephadm'
Jan 31 03:06:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 03:06:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154000332' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 03:06:15 np0005603663 interesting_carver[75557]: 
Jan 31 03:06:15 np0005603663 interesting_carver[75557]: {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "health": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "status": "HEALTH_OK",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "checks": {},
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "mutes": []
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "election_epoch": 5,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "quorum": [
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        0
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    ],
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "quorum_names": [
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "compute-0"
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    ],
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "quorum_age": 1,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "monmap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "epoch": 1,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "min_mon_release_name": "tentacle",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_mons": 1
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "osdmap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "epoch": 1,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_osds": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_up_osds": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "osd_up_since": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_in_osds": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "osd_in_since": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_remapped_pgs": 0
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "pgmap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "pgs_by_state": [],
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_pgs": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_pools": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_objects": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "data_bytes": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "bytes_used": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "bytes_avail": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "bytes_total": 0
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "fsmap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "epoch": 1,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "by_rank": [],
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "up:standby": 0
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "mgrmap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "available": false,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "num_standbys": 0,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "modules": [
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:            "iostat",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:            "nfs"
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        ],
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "services": {}
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "servicemap": {
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "epoch": 1,
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:        "services": {}
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    },
Jan 31 03:06:15 np0005603663 interesting_carver[75557]:    "progress_events": {}
Jan 31 03:06:15 np0005603663 interesting_carver[75557]: }
Jan 31 03:06:15 np0005603663 systemd[1]: libpod-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope: Deactivated successfully.
Jan 31 03:06:15 np0005603663 podman[75520]: 2026-01-31 08:06:15.211593299 +0000 UTC m=+0.351563102 container died 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884-merged.mount: Deactivated successfully.
Jan 31 03:06:15 np0005603663 podman[75520]: 2026-01-31 08:06:15.257679223 +0000 UTC m=+0.397648976 container remove 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:15 np0005603663 systemd[1]: libpod-conmon-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope: Deactivated successfully.
Jan 31 03:06:15 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'crash'
Jan 31 03:06:15 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'dashboard'
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'devicehealth'
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 03:06:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 03:06:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 03:06:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]:  from numpy import show_config as show_numpy_config
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'influx'
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'insights'
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'iostat'
Jan 31 03:06:16 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'k8sevents'
Jan 31 03:06:17 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'localpool'
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.315286209 +0000 UTC m=+0.040869667 container create ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:06:17 np0005603663 systemd[1]: Started libpod-conmon-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope.
Jan 31 03:06:17 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 03:06:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.29571864 +0000 UTC m=+0.021302148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.410741792 +0000 UTC m=+0.136325280 container init ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.415946161 +0000 UTC m=+0.141529659 container start ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.425335538 +0000 UTC m=+0.150919087 container attach ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:06:17 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'mirroring'
Jan 31 03:06:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 03:06:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422912478' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]: 
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]: {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "health": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "status": "HEALTH_OK",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "checks": {},
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "mutes": []
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "election_epoch": 5,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "quorum": [
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        0
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    ],
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "quorum_names": [
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "compute-0"
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    ],
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "quorum_age": 4,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "monmap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "epoch": 1,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "min_mon_release_name": "tentacle",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_mons": 1
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "osdmap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "epoch": 1,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_osds": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_up_osds": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "osd_up_since": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_in_osds": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "osd_in_since": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_remapped_pgs": 0
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "pgmap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "pgs_by_state": [],
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_pgs": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_pools": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_objects": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "data_bytes": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "bytes_used": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "bytes_avail": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "bytes_total": 0
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "fsmap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "epoch": 1,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "by_rank": [],
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "up:standby": 0
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "mgrmap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "available": false,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "num_standbys": 0,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "modules": [
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:            "iostat",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:            "nfs"
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        ],
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "services": {}
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "servicemap": {
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "epoch": 1,
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:        "services": {}
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    },
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]:    "progress_events": {}
Jan 31 03:06:17 np0005603663 romantic_goldwasser[75624]: }
Jan 31 03:06:17 np0005603663 systemd[1]: libpod-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope: Deactivated successfully.
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.634820715 +0000 UTC m=+0.360404213 container died ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a-merged.mount: Deactivated successfully.
Jan 31 03:06:17 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'nfs'
Jan 31 03:06:17 np0005603663 podman[75607]: 2026-01-31 08:06:17.672293695 +0000 UTC m=+0.397877193 container remove ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:06:17 np0005603663 systemd[1]: libpod-conmon-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope: Deactivated successfully.
Jan 31 03:06:17 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'orchestrator'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'osd_support'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'progress'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'prometheus'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rbd_support'
Jan 31 03:06:18 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rgw'
Jan 31 03:06:19 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rook'
Jan 31 03:06:19 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'selftest'
Jan 31 03:06:19 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'smb'
Jan 31 03:06:19 np0005603663 podman[75662]: 2026-01-31 08:06:19.739362361 +0000 UTC m=+0.045314314 container create 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:19 np0005603663 systemd[1]: Started libpod-conmon-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope.
Jan 31 03:06:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:19 np0005603663 podman[75662]: 2026-01-31 08:06:19.719786912 +0000 UTC m=+0.025738895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:19 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'snap_schedule'
Jan 31 03:06:19 np0005603663 podman[75662]: 2026-01-31 08:06:19.860912519 +0000 UTC m=+0.166864522 container init 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:19 np0005603663 podman[75662]: 2026-01-31 08:06:19.866098417 +0000 UTC m=+0.172050370 container start 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:19 np0005603663 podman[75662]: 2026-01-31 08:06:19.88269516 +0000 UTC m=+0.188647203 container attach 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:06:19 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'stats'
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'status'
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253638719' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 03:06:20 np0005603663 romantic_saha[75678]: 
Jan 31 03:06:20 np0005603663 romantic_saha[75678]: {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "health": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "status": "HEALTH_OK",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "checks": {},
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "mutes": []
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "election_epoch": 5,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "quorum": [
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        0
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    ],
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "quorum_names": [
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "compute-0"
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    ],
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "quorum_age": 6,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "monmap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "epoch": 1,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "min_mon_release_name": "tentacle",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_mons": 1
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "osdmap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "epoch": 1,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_osds": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_up_osds": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "osd_up_since": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_in_osds": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "osd_in_since": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_remapped_pgs": 0
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "pgmap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "pgs_by_state": [],
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_pgs": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_pools": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_objects": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "data_bytes": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "bytes_used": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "bytes_avail": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "bytes_total": 0
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "fsmap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "epoch": 1,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "by_rank": [],
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "up:standby": 0
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "mgrmap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "available": false,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "num_standbys": 0,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "modules": [
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:            "iostat",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:            "nfs"
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        ],
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "services": {}
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "servicemap": {
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "epoch": 1,
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:        "services": {}
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    },
Jan 31 03:06:20 np0005603663 romantic_saha[75678]:    "progress_events": {}
Jan 31 03:06:20 np0005603663 romantic_saha[75678]: }
Jan 31 03:06:20 np0005603663 systemd[1]: libpod-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope: Deactivated successfully.
Jan 31 03:06:20 np0005603663 podman[75662]: 2026-01-31 08:06:20.060015659 +0000 UTC m=+0.365967632 container died 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:20 np0005603663 systemd[1]: var-lib-containers-storage-overlay-78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a-merged.mount: Deactivated successfully.
Jan 31 03:06:20 np0005603663 podman[75662]: 2026-01-31 08:06:20.093629418 +0000 UTC m=+0.399581371 container remove 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'telegraf'
Jan 31 03:06:20 np0005603663 systemd[1]: libpod-conmon-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope: Deactivated successfully.
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'telemetry'
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'volumes'
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: ms_deliver_dispatch: unhandled message 0x55a9c8a8b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fqetdi
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr handle_mgr_map Activating!
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fqetdi(active, starting, since 0.00868354s)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr handle_mgr_map I am now activating
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: balancer
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer INFO root] Starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: crash
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:20
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Manager daemon compute-0.fqetdi is now available
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [balancer INFO root] No pools available
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: devicehealth
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: iostat
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: nfs
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: orchestrator
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: pg_autoscaler
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: progress
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [progress INFO root] Loading...
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [progress INFO root] No stored events to load
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [progress INFO root] Loaded [] historic events
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] recovery thread starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] starting setup
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: rbd_support
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: status
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: telemetry
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] PerfHandler: starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TaskHandler: starting
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] setup complete
Jan 31 03:06:20 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: volumes
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 31 03:06:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: Activating manager daemon compute-0.fqetdi
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: Manager daemon compute-0.fqetdi is now available
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:21 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fqetdi(active, since 1.01893s)
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.165904462 +0000 UTC m=+0.051569132 container create ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:06:22 np0005603663 systemd[1]: Started libpod-conmon-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope.
Jan 31 03:06:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.144434659 +0000 UTC m=+0.030099379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.245719189 +0000 UTC m=+0.131383879 container init ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.252119132 +0000 UTC m=+0.137783802 container start ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.254652834 +0000 UTC m=+0.140317534 container attach ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 03:06:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842329094' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 03:06:22 np0005603663 zen_bartik[75812]: 
Jan 31 03:06:22 np0005603663 zen_bartik[75812]: {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "health": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "status": "HEALTH_OK",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "checks": {},
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "mutes": []
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "election_epoch": 5,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "quorum": [
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        0
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    ],
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "quorum_names": [
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "compute-0"
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    ],
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "quorum_age": 9,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "monmap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "epoch": 1,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "min_mon_release_name": "tentacle",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_mons": 1
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "osdmap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "epoch": 1,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_osds": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_up_osds": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "osd_up_since": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_in_osds": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "osd_in_since": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_remapped_pgs": 0
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "pgmap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "pgs_by_state": [],
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_pgs": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_pools": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_objects": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "data_bytes": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "bytes_used": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "bytes_avail": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "bytes_total": 0
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "fsmap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "epoch": 1,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "by_rank": [],
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "up:standby": 0
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "mgrmap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "available": true,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "num_standbys": 0,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "modules": [
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:            "iostat",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:            "nfs"
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        ],
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "services": {}
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "servicemap": {
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "epoch": 1,
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:        "services": {}
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    },
Jan 31 03:06:22 np0005603663 zen_bartik[75812]:    "progress_events": {}
Jan 31 03:06:22 np0005603663 zen_bartik[75812]: }
Jan 31 03:06:22 np0005603663 systemd[1]: libpod-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope: Deactivated successfully.
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.796087032 +0000 UTC m=+0.681751752 container died ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:06:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967-merged.mount: Deactivated successfully.
Jan 31 03:06:22 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:22 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:22 np0005603663 podman[75795]: 2026-01-31 08:06:22.907655245 +0000 UTC m=+0.793319915 container remove ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:06:22 np0005603663 systemd[1]: libpod-conmon-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope: Deactivated successfully.
Jan 31 03:06:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fqetdi(active, since 2s)
Jan 31 03:06:22 np0005603663 podman[75850]: 2026-01-31 08:06:22.966055281 +0000 UTC m=+0.043108981 container create 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:06:23 np0005603663 systemd[1]: Started libpod-conmon-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope.
Jan 31 03:06:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:22.942918361 +0000 UTC m=+0.019972081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:23.042218034 +0000 UTC m=+0.119271804 container init 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:23.050641615 +0000 UTC m=+0.127695355 container start 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:23.054619988 +0000 UTC m=+0.131673778 container attach 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:06:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 03:06:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/394067677' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:06:23 np0005603663 inspiring_bassi[75866]: 
Jan 31 03:06:23 np0005603663 inspiring_bassi[75866]: [global]
Jan 31 03:06:23 np0005603663 inspiring_bassi[75866]: 	fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:06:23 np0005603663 inspiring_bassi[75866]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 03:06:23 np0005603663 inspiring_bassi[75866]: 	osd_crush_chooseleaf_type = 0
Jan 31 03:06:23 np0005603663 systemd[1]: libpod-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope: Deactivated successfully.
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:23.457014969 +0000 UTC m=+0.534068679 container died 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:06:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac-merged.mount: Deactivated successfully.
Jan 31 03:06:23 np0005603663 podman[75850]: 2026-01-31 08:06:23.488138017 +0000 UTC m=+0.565191747 container remove 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:23 np0005603663 systemd[1]: libpod-conmon-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope: Deactivated successfully.
Jan 31 03:06:23 np0005603663 podman[75904]: 2026-01-31 08:06:23.534592623 +0000 UTC m=+0.032901880 container create f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:06:23 np0005603663 systemd[1]: Started libpod-conmon-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope.
Jan 31 03:06:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:23 np0005603663 podman[75904]: 2026-01-31 08:06:23.599826134 +0000 UTC m=+0.098135491 container init f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:23 np0005603663 podman[75904]: 2026-01-31 08:06:23.605544377 +0000 UTC m=+0.103853674 container start f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:23 np0005603663 podman[75904]: 2026-01-31 08:06:23.609536991 +0000 UTC m=+0.107846368 container attach f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:23 np0005603663 podman[75904]: 2026-01-31 08:06:23.520410668 +0000 UTC m=+0.018719955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:23 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/394067677' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:06:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 31 03:06:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:24 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 03:06:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  1: '-n'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  2: 'mgr.compute-0.fqetdi'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  3: '-f'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  4: '--setuser'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  5: 'ceph'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  6: '--setgroup'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  7: 'ceph'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 03:06:24 np0005603663 ceph-mgr[75519]: mgr respawn  exe_path /proc/self/exe
Jan 31 03:06:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fqetdi(active, since 4s)
Jan 31 03:06:24 np0005603663 systemd[1]: libpod-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope: Deactivated successfully.
Jan 31 03:06:24 np0005603663 podman[75904]: 2026-01-31 08:06:24.994020371 +0000 UTC m=+1.492329628 container died f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:06:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0-merged.mount: Deactivated successfully.
Jan 31 03:06:25 np0005603663 podman[75904]: 2026-01-31 08:06:25.023367619 +0000 UTC m=+1.521676866 container remove f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:06:25 np0005603663 systemd[1]: libpod-conmon-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope: Deactivated successfully.
Jan 31 03:06:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: ignoring --setuser ceph since I am not root
Jan 31 03:06:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: ignoring --setgroup ceph since I am not root
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: pidfile_write: ignore empty --pid-file
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'alerts'
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.078062509 +0000 UTC m=+0.038715505 container create 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:25 np0005603663 systemd[1]: Started libpod-conmon-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope.
Jan 31 03:06:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.15451385 +0000 UTC m=+0.115166846 container init 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.158506834 +0000 UTC m=+0.119159830 container start 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.063184345 +0000 UTC m=+0.023837361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.161900221 +0000 UTC m=+0.122553237 container attach 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'balancer'
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'cephadm'
Jan 31 03:06:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 03:06:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544431121' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]: {
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]:    "epoch": 5,
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]:    "available": true,
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]:    "active_name": "compute-0.fqetdi",
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]:    "num_standby": 0
Jan 31 03:06:25 np0005603663 ecstatic_ride[75994]: }
Jan 31 03:06:25 np0005603663 systemd[1]: libpod-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope: Deactivated successfully.
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.625090526 +0000 UTC m=+0.585743562 container died 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 03:06:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20-merged.mount: Deactivated successfully.
Jan 31 03:06:25 np0005603663 podman[75958]: 2026-01-31 08:06:25.677119081 +0000 UTC m=+0.637772077 container remove 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:25 np0005603663 systemd[1]: libpod-conmon-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope: Deactivated successfully.
Jan 31 03:06:25 np0005603663 podman[76042]: 2026-01-31 08:06:25.748364564 +0000 UTC m=+0.051347026 container create d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:06:25 np0005603663 systemd[1]: Started libpod-conmon-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope.
Jan 31 03:06:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:25 np0005603663 podman[76042]: 2026-01-31 08:06:25.727686094 +0000 UTC m=+0.030668586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:25 np0005603663 podman[76042]: 2026-01-31 08:06:25.824214528 +0000 UTC m=+0.127196990 container init d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:25 np0005603663 podman[76042]: 2026-01-31 08:06:25.830676052 +0000 UTC m=+0.133658514 container start d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:06:25 np0005603663 podman[76042]: 2026-01-31 08:06:25.837093765 +0000 UTC m=+0.140076227 container attach d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'crash'
Jan 31 03:06:25 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'dashboard'
Jan 31 03:06:25 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 03:06:26 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'devicehealth'
Jan 31 03:06:26 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 03:06:26 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 03:06:26 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 03:06:26 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]:  from numpy import show_config as show_numpy_config
Jan 31 03:06:26 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'influx'
Jan 31 03:06:26 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'insights'
Jan 31 03:06:26 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'iostat'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'k8sevents'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'localpool'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'mirroring'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'nfs'
Jan 31 03:06:27 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'orchestrator'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'osd_support'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'progress'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'prometheus'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rbd_support'
Jan 31 03:06:28 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rgw'
Jan 31 03:06:29 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'rook'
Jan 31 03:06:29 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'selftest'
Jan 31 03:06:29 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'smb'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'snap_schedule'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'stats'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'status'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'telegraf'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'telemetry'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: mgr[py] Loading python module 'volumes'
Jan 31 03:06:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fqetdi restarted
Jan 31 03:06:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 03:06:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:06:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fqetdi
Jan 31 03:06:30 np0005603663 ceph-mgr[75519]: ms_deliver_dispatch: unhandled message 0x555800902000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: mgr handle_mgr_map Activating!
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: mgr handle_mgr_map I am now activating
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fqetdi(active, starting, since 0.59262s)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} v 0)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: balancer
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Starting
Jan 31 03:06:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Manager daemon compute-0.fqetdi is now available
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:31
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:06:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] No pools available
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: Active manager daemon compute-0.fqetdi restarted
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: Activating manager daemon compute-0.fqetdi
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fqetdi(active, since 1.7327s)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 03:06:32 np0005603663 upbeat_meninsky[76059]: {
Jan 31 03:06:32 np0005603663 upbeat_meninsky[76059]:    "mgrmap_epoch": 7,
Jan 31 03:06:32 np0005603663 upbeat_meninsky[76059]:    "initialized": true
Jan 31 03:06:32 np0005603663 upbeat_meninsky[76059]: }
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: cephadm
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: crash
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: devicehealth
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Starting
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: iostat
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: nfs
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: orchestrator
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: pg_autoscaler
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: progress
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [progress INFO root] Loading...
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [progress INFO root] No stored events to load
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [progress INFO root] Loaded [] historic events
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 03:06:32 np0005603663 systemd[1]: libpod-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope: Deactivated successfully.
Jan 31 03:06:32 np0005603663 podman[76042]: 2026-01-31 08:06:32.768216017 +0000 UTC m=+7.071198479 container died d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] recovery thread starting
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] starting setup
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: rbd_support
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: status
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: telemetry
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] PerfHandler: starting
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TaskHandler: starting
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} v 0)
Jan 31 03:06:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] setup complete
Jan 31 03:06:32 np0005603663 ceph-mgr[75519]: mgr load Constructed class from module: volumes
Jan 31 03:06:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161-merged.mount: Deactivated successfully.
Jan 31 03:06:32 np0005603663 podman[76042]: 2026-01-31 08:06:32.828448985 +0000 UTC m=+7.131431447 container remove d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:06:32 np0005603663 systemd[1]: libpod-conmon-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope: Deactivated successfully.
Jan 31 03:06:32 np0005603663 podman[76208]: 2026-01-31 08:06:32.876866687 +0000 UTC m=+0.029281187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:32 np0005603663 podman[76208]: 2026-01-31 08:06:32.982041107 +0000 UTC m=+0.134455607 container create 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:33 np0005603663 systemd[1]: Started libpod-conmon-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope.
Jan 31 03:06:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 podman[76208]: 2026-01-31 08:06:33.193593163 +0000 UTC m=+0.346007723 container init 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:33 np0005603663 podman[76208]: 2026-01-31 08:06:33.19944216 +0000 UTC m=+0.351856650 container start 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: Manager daemon compute-0.fqetdi is now available
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: Found migration_current of "None". Setting to last migration.
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 03:06:33 np0005603663 podman[76208]: 2026-01-31 08:06:33.244795004 +0000 UTC m=+0.397209544 container attach 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019902321 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:33 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 03:06:33 np0005603663 upbeat_nobel[76224]: module 'orchestrator' is already enabled (always-on)
Jan 31 03:06:33 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fqetdi(active, since 2s)
Jan 31 03:06:33 np0005603663 systemd[1]: libpod-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope: Deactivated successfully.
Jan 31 03:06:33 np0005603663 podman[76208]: 2026-01-31 08:06:33.780883879 +0000 UTC m=+0.933298379 container died 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:06:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c-merged.mount: Deactivated successfully.
Jan 31 03:06:33 np0005603663 podman[76208]: 2026-01-31 08:06:33.823036912 +0000 UTC m=+0.975451382 container remove 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:06:33 np0005603663 systemd[1]: libpod-conmon-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope: Deactivated successfully.
Jan 31 03:06:33 np0005603663 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 03:06:33 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 03:06:33 np0005603663 podman[76262]: 2026-01-31 08:06:33.889810217 +0000 UTC m=+0.046126707 container create 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:33 np0005603663 systemd[1]: Started libpod-conmon-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope.
Jan 31 03:06:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:33 np0005603663 podman[76262]: 2026-01-31 08:06:33.870909508 +0000 UTC m=+0.027225998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:33 np0005603663 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 03:06:33 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 03:06:34 np0005603663 podman[76262]: 2026-01-31 08:06:34.061742412 +0000 UTC m=+0.218058922 container init 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:34 np0005603663 podman[76262]: 2026-01-31 08:06:34.068428233 +0000 UTC m=+0.224744713 container start 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:06:34 np0005603663 podman[76262]: 2026-01-31 08:06:34.097481362 +0000 UTC m=+0.253797842 container attach 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:06:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:06:34 np0005603663 systemd[1]: libpod-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope: Deactivated successfully.
Jan 31 03:06:34 np0005603663 podman[76262]: 2026-01-31 08:06:34.545160124 +0000 UTC m=+0.701476574 container died 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac-merged.mount: Deactivated successfully.
Jan 31 03:06:34 np0005603663 podman[76262]: 2026-01-31 08:06:34.579777862 +0000 UTC m=+0.736094322 container remove 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:06:34 np0005603663 systemd[1]: libpod-conmon-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope: Deactivated successfully.
Jan 31 03:06:34 np0005603663 podman[76339]: 2026-01-31 08:06:34.634505223 +0000 UTC m=+0.041659219 container create 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:34 np0005603663 systemd[1]: Started libpod-conmon-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope.
Jan 31 03:06:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:34 np0005603663 podman[76339]: 2026-01-31 08:06:34.701061882 +0000 UTC m=+0.108215928 container init 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:34 np0005603663 podman[76339]: 2026-01-31 08:06:34.613227836 +0000 UTC m=+0.020381932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:34 np0005603663 podman[76339]: 2026-01-31 08:06:34.708124324 +0000 UTC m=+0.115278350 container start 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:34 np0005603663 podman[76339]: 2026-01-31 08:06:34.712532349 +0000 UTC m=+0.119686375 container attach 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:34 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_user
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_config
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 03:06:35 np0005603663 nervous_roentgen[76356]: ssh user set to ceph-admin. sudo will be used
Jan 31 03:06:35 np0005603663 systemd[1]: libpod-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76339]: 2026-01-31 08:06:35.139143641 +0000 UTC m=+0.546297647 container died 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:06:35 np0005603663 systemd[1]: var-lib-containers-storage-overlay-32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd-merged.mount: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76339]: 2026-01-31 08:06:35.16888978 +0000 UTC m=+0.576043776 container remove 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:06:35 np0005603663 systemd[1]: libpod-conmon-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.229103968 +0000 UTC m=+0.046406215 container create f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:06:35 np0005603663 systemd[1]: Started libpod-conmon-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope.
Jan 31 03:06:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.204775874 +0000 UTC m=+0.022078161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.30593185 +0000 UTC m=+0.123234087 container init f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.319375223 +0000 UTC m=+0.136677490 container start f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.324375586 +0000 UTC m=+0.141677843 container attach f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 31 03:06:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Set ssh private key
Jan 31 03:06:35 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 03:06:35 np0005603663 systemd[1]: libpod-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.785555934 +0000 UTC m=+0.602858171 container died f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:35 np0005603663 systemd[1]: var-lib-containers-storage-overlay-dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c-merged.mount: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76393]: 2026-01-31 08:06:35.814559772 +0000 UTC m=+0.631862009 container remove f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:35 np0005603663 systemd[1]: libpod-conmon-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603663 podman[76447]: 2026-01-31 08:06:35.85901198 +0000 UTC m=+0.029677318 container create 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:06:35 np0005603663 systemd[1]: Started libpod-conmon-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope.
Jan 31 03:06:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:35 np0005603663 podman[76447]: 2026-01-31 08:06:35.918556079 +0000 UTC m=+0.089221387 container init 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:06:35 np0005603663 podman[76447]: 2026-01-31 08:06:35.927756851 +0000 UTC m=+0.098422189 container start 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:06:35 np0005603663 podman[76447]: 2026-01-31 08:06:35.932052374 +0000 UTC m=+0.102717712 container attach 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:06:35 np0005603663 podman[76447]: 2026-01-31 08:06:35.844703622 +0000 UTC m=+0.015368940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:36 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:36 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 03:06:36 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 03:06:36 np0005603663 systemd[1]: libpod-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope: Deactivated successfully.
Jan 31 03:06:36 np0005603663 conmon[76463]: conmon 8a7d52f7b432d4f0791f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope/container/memory.events
Jan 31 03:06:36 np0005603663 podman[76447]: 2026-01-31 08:06:36.618229051 +0000 UTC m=+0.788894359 container died 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7-merged.mount: Deactivated successfully.
Jan 31 03:06:36 np0005603663 podman[76447]: 2026-01-31 08:06:36.650280036 +0000 UTC m=+0.820945344 container remove 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:36 np0005603663 systemd[1]: libpod-conmon-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope: Deactivated successfully.
Jan 31 03:06:36 np0005603663 podman[76501]: 2026-01-31 08:06:36.726793319 +0000 UTC m=+0.061958209 container create 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:36 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:36 np0005603663 systemd[1]: Started libpod-conmon-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope.
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: Set ssh ssh_user
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: Set ssh ssh_config
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: ssh user set to ceph-admin. sudo will be used
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: Set ssh ssh_identity_key
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: Set ssh private key
Jan 31 03:06:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:36 np0005603663 podman[76501]: 2026-01-31 08:06:36.685158631 +0000 UTC m=+0.020323571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:36 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:36 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:36 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:36 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:36 np0005603663 podman[76501]: 2026-01-31 08:06:36.822831139 +0000 UTC m=+0.157996039 container init 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:36 np0005603663 podman[76501]: 2026-01-31 08:06:36.829046956 +0000 UTC m=+0.164211876 container start 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 03:06:36 np0005603663 podman[76501]: 2026-01-31 08:06:36.833499993 +0000 UTC m=+0.168664893 container attach 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:37 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:37 np0005603663 cool_keldysh[76517]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBX4tgy1ewLcXde4iDx3CKeVUkIjIVtx9Ubv8Ne1qBY2gF18JFQDhG9+Uj8MmytXfbfbslwMxIRDThV4W4akGN/R6C+84jmeDosYAUouDsqoQ+ZSR52oY4W75/cTFCaXQyMa0I2alM942WPSngyKA12FpaluWyAaN9ZlrhM7LHe7zF9oEg8yrX02rbnzU+5fleC/Q9H1jArgVklTV5r/dLiDj+H/ZYjb1zNROtH9pH7rWKS9CB7lCeflFijdGli5ChbWFLosewDRqB4IO2D/Xb64a3YLAnqrCmwFRUTZyG5dt40IPkhOqP6Cpr5V+xjYzejxIJ2HIVWZ3/MDyDhNRKHDjwG6z+Mdzb/fsJAUCzWLPyZrq7RxOThV4tOXL57arAZHdsl7tt4LWvwt2gUxSqsmFiGEiGYXkqKS89UHvpHpyHtvqAcvKUlUJX2YZNyOkc+doGJN6EeBpc+1Buwcp5aV/Xw/Fs77zMkzBRA+HOTb+hMd1R9HHZqt195rKwevE= zuul@controller
Jan 31 03:06:37 np0005603663 systemd[1]: libpod-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope: Deactivated successfully.
Jan 31 03:06:37 np0005603663 podman[76501]: 2026-01-31 08:06:37.251927491 +0000 UTC m=+0.587092381 container died 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:06:37 np0005603663 systemd[1]: var-lib-containers-storage-overlay-55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2-merged.mount: Deactivated successfully.
Jan 31 03:06:37 np0005603663 podman[76501]: 2026-01-31 08:06:37.292407854 +0000 UTC m=+0.627572754 container remove 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:37 np0005603663 systemd[1]: libpod-conmon-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope: Deactivated successfully.
Jan 31 03:06:37 np0005603663 podman[76555]: 2026-01-31 08:06:37.368156546 +0000 UTC m=+0.055783576 container create aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:37 np0005603663 systemd[1]: Started libpod-conmon-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope.
Jan 31 03:06:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:37 np0005603663 podman[76555]: 2026-01-31 08:06:37.34500455 +0000 UTC m=+0.032631650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:37 np0005603663 podman[76555]: 2026-01-31 08:06:37.505905259 +0000 UTC m=+0.193532339 container init aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:37 np0005603663 podman[76555]: 2026-01-31 08:06:37.511723118 +0000 UTC m=+0.199350168 container start aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:06:37 np0005603663 podman[76555]: 2026-01-31 08:06:37.5163638 +0000 UTC m=+0.203990850 container attach aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:06:37 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:37 np0005603663 ceph-mon[75227]: Set ssh ssh_identity_pub
Jan 31 03:06:37 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:38 np0005603663 systemd-logind[793]: New session 21 of user ceph-admin.
Jan 31 03:06:38 np0005603663 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 03:06:38 np0005603663 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 03:06:38 np0005603663 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 03:06:38 np0005603663 systemd[1]: Starting User Manager for UID 42477...
Jan 31 03:06:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052584 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:38 np0005603663 systemd[76601]: Queued start job for default target Main User Target.
Jan 31 03:06:38 np0005603663 systemd-logind[793]: New session 23 of user ceph-admin.
Jan 31 03:06:38 np0005603663 systemd[76601]: Created slice User Application Slice.
Jan 31 03:06:38 np0005603663 systemd[76601]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:06:38 np0005603663 systemd[76601]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:06:38 np0005603663 systemd[76601]: Reached target Paths.
Jan 31 03:06:38 np0005603663 systemd[76601]: Reached target Timers.
Jan 31 03:06:38 np0005603663 systemd[76601]: Starting D-Bus User Message Bus Socket...
Jan 31 03:06:38 np0005603663 systemd[76601]: Starting Create User's Volatile Files and Directories...
Jan 31 03:06:38 np0005603663 systemd[76601]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:06:38 np0005603663 systemd[76601]: Reached target Sockets.
Jan 31 03:06:38 np0005603663 systemd[76601]: Finished Create User's Volatile Files and Directories.
Jan 31 03:06:38 np0005603663 systemd[76601]: Reached target Basic System.
Jan 31 03:06:38 np0005603663 systemd[76601]: Reached target Main User Target.
Jan 31 03:06:38 np0005603663 systemd[76601]: Startup finished in 156ms.
Jan 31 03:06:38 np0005603663 systemd[1]: Started User Manager for UID 42477.
Jan 31 03:06:38 np0005603663 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 03:06:38 np0005603663 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 03:06:38 np0005603663 systemd-logind[793]: New session 24 of user ceph-admin.
Jan 31 03:06:38 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:38 np0005603663 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 03:06:39 np0005603663 systemd-logind[793]: New session 25 of user ceph-admin.
Jan 31 03:06:39 np0005603663 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 03:06:39 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 03:06:39 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 03:06:39 np0005603663 systemd-logind[793]: New session 26 of user ceph-admin.
Jan 31 03:06:39 np0005603663 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 03:06:39 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:39 np0005603663 systemd-logind[793]: New session 27 of user ceph-admin.
Jan 31 03:06:39 np0005603663 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 03:06:40 np0005603663 ceph-mon[75227]: Deploying cephadm binary to compute-0
Jan 31 03:06:40 np0005603663 systemd-logind[793]: New session 28 of user ceph-admin.
Jan 31 03:06:40 np0005603663 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 03:06:40 np0005603663 systemd-logind[793]: New session 29 of user ceph-admin.
Jan 31 03:06:40 np0005603663 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 03:06:40 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:40 np0005603663 systemd-logind[793]: New session 30 of user ceph-admin.
Jan 31 03:06:40 np0005603663 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 03:06:41 np0005603663 systemd-logind[793]: New session 31 of user ceph-admin.
Jan 31 03:06:41 np0005603663 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 03:06:41 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:42 np0005603663 systemd-logind[793]: New session 32 of user ceph-admin.
Jan 31 03:06:42 np0005603663 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 03:06:42 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:42 np0005603663 systemd-logind[793]: New session 33 of user ceph-admin.
Jan 31 03:06:42 np0005603663 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 03:06:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:06:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:43 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Added host compute-0
Jan 31 03:06:43 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 03:06:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:06:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:06:43 np0005603663 busy_cray[76571]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 03:06:43 np0005603663 systemd[1]: libpod-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope: Deactivated successfully.
Jan 31 03:06:43 np0005603663 podman[76555]: 2026-01-31 08:06:43.39880585 +0000 UTC m=+6.086432960 container died aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:43 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba-merged.mount: Deactivated successfully.
Jan 31 03:06:43 np0005603663 podman[76555]: 2026-01-31 08:06:43.443046274 +0000 UTC m=+6.130673324 container remove aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:43 np0005603663 systemd[1]: libpod-conmon-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope: Deactivated successfully.
Jan 31 03:06:43 np0005603663 podman[77002]: 2026-01-31 08:06:43.510331806 +0000 UTC m=+0.043684145 container create a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:43 np0005603663 systemd[1]: Started libpod-conmon-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope.
Jan 31 03:06:43 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:43 np0005603663 podman[77002]: 2026-01-31 08:06:43.494078858 +0000 UTC m=+0.027431217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:43 np0005603663 podman[77002]: 2026-01-31 08:06:43.593656597 +0000 UTC m=+0.127009016 container init a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:43 np0005603663 podman[77002]: 2026-01-31 08:06:43.60115552 +0000 UTC m=+0.134507869 container start a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:06:43 np0005603663 podman[77002]: 2026-01-31 08:06:43.604705983 +0000 UTC m=+0.138058422 container attach a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:43 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:44 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:44 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 03:06:44 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:44 np0005603663 relaxed_banach[77030]: Scheduled mon update...
Jan 31 03:06:44 np0005603663 systemd[1]: libpod-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope: Deactivated successfully.
Jan 31 03:06:44 np0005603663 podman[77002]: 2026-01-31 08:06:44.107969391 +0000 UTC m=+0.641321730 container died a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:44 np0005603663 systemd[1]: var-lib-containers-storage-overlay-08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be-merged.mount: Deactivated successfully.
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: Added host compute-0
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: Saving service mon spec with placement count:5
Jan 31 03:06:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:44 np0005603663 podman[77002]: 2026-01-31 08:06:44.478882218 +0000 UTC m=+1.012234567 container remove a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:44 np0005603663 podman[77047]: 2026-01-31 08:06:44.508613943 +0000 UTC m=+0.803315299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:44 np0005603663 podman[77095]: 2026-01-31 08:06:44.586791696 +0000 UTC m=+0.084070630 container create 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:06:44 np0005603663 podman[77095]: 2026-01-31 08:06:44.535198452 +0000 UTC m=+0.032477436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:44 np0005603663 systemd[1]: Started libpod-conmon-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope.
Jan 31 03:06:44 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:44 np0005603663 systemd[1]: libpod-conmon-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope: Deactivated successfully.
Jan 31 03:06:44 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:44 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:44 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:44 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:44 np0005603663 podman[77095]: 2026-01-31 08:06:44.767203651 +0000 UTC m=+0.264482615 container init 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:44 np0005603663 podman[77095]: 2026-01-31 08:06:44.774688932 +0000 UTC m=+0.271967906 container start 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:44 np0005603663 podman[77125]: 2026-01-31 08:06:44.688634212 +0000 UTC m=+0.024130026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:44 np0005603663 podman[77095]: 2026-01-31 08:06:44.817003302 +0000 UTC m=+0.314282256 container attach 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:44 np0005603663 podman[77125]: 2026-01-31 08:06:44.84589525 +0000 UTC m=+0.181391084 container create 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:06:44 np0005603663 systemd[1]: Started libpod-conmon-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope.
Jan 31 03:06:44 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:45 np0005603663 podman[77125]: 2026-01-31 08:06:45.063907916 +0000 UTC m=+0.399403750 container init 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:06:45 np0005603663 podman[77125]: 2026-01-31 08:06:45.069091647 +0000 UTC m=+0.404587481 container start 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:45 np0005603663 podman[77125]: 2026-01-31 08:06:45.1555094 +0000 UTC m=+0.491005224 container attach 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:06:45 np0005603663 lucid_spence[77165]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 03:06:45 np0005603663 systemd[1]: libpod-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope: Deactivated successfully.
Jan 31 03:06:45 np0005603663 podman[77125]: 2026-01-31 08:06:45.167835812 +0000 UTC m=+0.503331606 container died 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:06:45 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:45 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 03:06:45 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:45 np0005603663 stupefied_hermann[77127]: Scheduled mgr update...
Jan 31 03:06:45 np0005603663 systemd[1]: libpod-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope: Deactivated successfully.
Jan 31 03:06:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3a1341c3f7e4358bddb770aa99959bfbaa44855e1231c21c4fe5c19c162f34df-merged.mount: Deactivated successfully.
Jan 31 03:06:45 np0005603663 podman[77125]: 2026-01-31 08:06:45.496218888 +0000 UTC m=+0.831714722 container remove 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:45 np0005603663 systemd[1]: libpod-conmon-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope: Deactivated successfully.
Jan 31 03:06:45 np0005603663 podman[77095]: 2026-01-31 08:06:45.534477319 +0000 UTC m=+1.031756293 container died 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e-merged.mount: Deactivated successfully.
Jan 31 03:06:45 np0005603663 podman[77183]: 2026-01-31 08:06:45.57933376 +0000 UTC m=+0.213650680 container remove 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:45 np0005603663 systemd[1]: libpod-conmon-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope: Deactivated successfully.
Jan 31 03:06:45 np0005603663 podman[77212]: 2026-01-31 08:06:45.634002934 +0000 UTC m=+0.041958819 container create a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:06:45 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:45 np0005603663 systemd[1]: Started libpod-conmon-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope.
Jan 31 03:06:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:45 np0005603663 podman[77212]: 2026-01-31 08:06:45.70913754 +0000 UTC m=+0.117093435 container init a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:06:45 np0005603663 podman[77212]: 2026-01-31 08:06:45.613420011 +0000 UTC m=+0.021375926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:45 np0005603663 podman[77212]: 2026-01-31 08:06:45.715246496 +0000 UTC m=+0.123202381 container start a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:06:45 np0005603663 podman[77212]: 2026-01-31 08:06:45.718528874 +0000 UTC m=+0.126484749 container attach a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:46 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 03:06:46 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 zen_haslett[77264]: Scheduled crash update...
Jan 31 03:06:46 np0005603663 systemd[1]: libpod-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope: Deactivated successfully.
Jan 31 03:06:46 np0005603663 podman[77212]: 2026-01-31 08:06:46.159643478 +0000 UTC m=+0.567599383 container died a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542-merged.mount: Deactivated successfully.
Jan 31 03:06:46 np0005603663 podman[77212]: 2026-01-31 08:06:46.201891882 +0000 UTC m=+0.609847767 container remove a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:06:46 np0005603663 systemd[1]: libpod-conmon-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope: Deactivated successfully.
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.261462762 +0000 UTC m=+0.039744057 container create 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:06:46 np0005603663 systemd[1]: Started libpod-conmon-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope.
Jan 31 03:06:46 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: Saving service mgr spec with placement count:2
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.340499213 +0000 UTC m=+0.118780528 container init 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.246882425 +0000 UTC m=+0.025163770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.34640275 +0000 UTC m=+0.124684085 container start 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.350679149 +0000 UTC m=+0.128960464 container attach 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:46 np0005603663 podman[77445]: 2026-01-31 08:06:46.558109142 +0000 UTC m=+0.075998026 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:06:46 np0005603663 podman[77445]: 2026-01-31 08:06:46.640767012 +0000 UTC m=+0.158655906 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:46 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/697546649' entity='client.admin' 
Jan 31 03:06:46 np0005603663 systemd[1]: libpod-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope: Deactivated successfully.
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.830129861 +0000 UTC m=+0.608411186 container died 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e-merged.mount: Deactivated successfully.
Jan 31 03:06:46 np0005603663 podman[77371]: 2026-01-31 08:06:46.873218061 +0000 UTC m=+0.651499356 container remove 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:46 np0005603663 systemd[1]: libpod-conmon-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope: Deactivated successfully.
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:46 np0005603663 podman[77544]: 2026-01-31 08:06:46.943022772 +0000 UTC m=+0.043604298 container create 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:06:46 np0005603663 systemd[1]: Started libpod-conmon-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope.
Jan 31 03:06:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 podman[77544]: 2026-01-31 08:06:46.924012373 +0000 UTC m=+0.024593879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:47 np0005603663 podman[77544]: 2026-01-31 08:06:47.027389478 +0000 UTC m=+0.127970994 container init 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:47 np0005603663 podman[77544]: 2026-01-31 08:06:47.03433079 +0000 UTC m=+0.134912316 container start 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:47 np0005603663 podman[77544]: 2026-01-31 08:06:47.037948379 +0000 UTC m=+0.138529895 container attach 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:06:47 np0005603663 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77647 (sysctl)
Jan 31 03:06:47 np0005603663 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 03:06:47 np0005603663 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 03:06:47 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:47 np0005603663 systemd[1]: libpod-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope: Deactivated successfully.
Jan 31 03:06:47 np0005603663 podman[77661]: 2026-01-31 08:06:47.512945305 +0000 UTC m=+0.022449313 container died 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:47 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b-merged.mount: Deactivated successfully.
Jan 31 03:06:47 np0005603663 podman[77661]: 2026-01-31 08:06:47.551547257 +0000 UTC m=+0.061051225 container remove 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:47 np0005603663 systemd[1]: libpod-conmon-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope: Deactivated successfully.
Jan 31 03:06:47 np0005603663 podman[77686]: 2026-01-31 08:06:47.619128536 +0000 UTC m=+0.047418435 container create 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:47 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:47 np0005603663 systemd[1]: Started libpod-conmon-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope.
Jan 31 03:06:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:47 np0005603663 podman[77686]: 2026-01-31 08:06:47.606005342 +0000 UTC m=+0.034295271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:47 np0005603663 podman[77686]: 2026-01-31 08:06:47.695712884 +0000 UTC m=+0.124002783 container init 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:06:47 np0005603663 podman[77686]: 2026-01-31 08:06:47.699991983 +0000 UTC m=+0.128281892 container start 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:06:47 np0005603663 podman[77686]: 2026-01-31 08:06:47.703332847 +0000 UTC m=+0.131622766 container attach 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: Saving service crash spec with placement *
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/697546649' entity='client.admin' 
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:48 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:48 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 03:06:48 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 03:06:48 np0005603663 modest_kapitsa[77740]: Added label _admin to host compute-0
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77686]: 2026-01-31 08:06:48.149359388 +0000 UTC m=+0.577649287 container died 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:06:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73-merged.mount: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77686]: 2026-01-31 08:06:48.175082248 +0000 UTC m=+0.603372147 container remove 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-conmon-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77858]: 2026-01-31 08:06:48.226659871 +0000 UTC m=+0.036195014 container create 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:06:48 np0005603663 systemd[1]: Started libpod-conmon-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope.
Jan 31 03:06:48 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.300049408 +0000 UTC m=+0.040347642 container create 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:48 np0005603663 podman[77858]: 2026-01-31 08:06:48.213822473 +0000 UTC m=+0.023357636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:48 np0005603663 systemd[1]: Started libpod-conmon-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope.
Jan 31 03:06:48 np0005603663 podman[77858]: 2026-01-31 08:06:48.323011567 +0000 UTC m=+0.132546730 container init 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:48 np0005603663 podman[77858]: 2026-01-31 08:06:48.327922694 +0000 UTC m=+0.137457847 container start 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:06:48 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:48 np0005603663 podman[77858]: 2026-01-31 08:06:48.330772704 +0000 UTC m=+0.140307847 container attach 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.342214125 +0000 UTC m=+0.082512409 container init 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.34886727 +0000 UTC m=+0.089165524 container start 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:48 np0005603663 zealous_dirac[77907]: 167 167
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.352295912 +0000 UTC m=+0.092594196 container attach 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.35293377 +0000 UTC m=+0.093232014 container died 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.28017413 +0000 UTC m=+0.020472404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b43f384dd6307197c6f163f8bb8591a3795e3f4d3a4246c430cdf7dff30b6528-merged.mount: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77888]: 2026-01-31 08:06:48.392714489 +0000 UTC m=+0.133012753 container remove 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-conmon-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2926359208' entity='client.admin' 
Jan 31 03:06:48 np0005603663 nervous_chandrasekhar[77896]: set mgr/dashboard/cluster/status
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77945]: 2026-01-31 08:06:48.906575841 +0000 UTC m=+0.021558473 container died 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:48 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2926359208' entity='client.admin' 
Jan 31 03:06:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b-merged.mount: Deactivated successfully.
Jan 31 03:06:48 np0005603663 podman[77945]: 2026-01-31 08:06:48.951977682 +0000 UTC m=+0.066960244 container remove 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:48 np0005603663 systemd[1]: libpod-conmon-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope: Deactivated successfully.
Jan 31 03:06:48 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:49 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:49 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:49 np0005603663 podman[78007]: 2026-01-31 08:06:49.401921039 +0000 UTC m=+0.039454841 container create 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:06:49 np0005603663 systemd[1]: Started libpod-conmon-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope.
Jan 31 03:06:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 podman[78007]: 2026-01-31 08:06:49.385568821 +0000 UTC m=+0.023102623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:49 np0005603663 podman[78007]: 2026-01-31 08:06:49.496050263 +0000 UTC m=+0.133584035 container init 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:06:49 np0005603663 podman[78007]: 2026-01-31 08:06:49.514063952 +0000 UTC m=+0.151597724 container start 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:49 np0005603663 podman[78007]: 2026-01-31 08:06:49.517473422 +0000 UTC m=+0.155007264 container attach 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:06:49 np0005603663 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 03:06:49 np0005603663 python3[78053]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:49 np0005603663 podman[78059]: 2026-01-31 08:06:49.855115912 +0000 UTC m=+0.053150657 container create a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:49 np0005603663 systemd[1]: Started libpod-conmon-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope.
Jan 31 03:06:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:49 np0005603663 ceph-mon[75227]: Added label _admin to host compute-0
Jan 31 03:06:49 np0005603663 podman[78059]: 2026-01-31 08:06:49.836985872 +0000 UTC m=+0.035020647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:49 np0005603663 podman[78059]: 2026-01-31 08:06:49.944849426 +0000 UTC m=+0.142884191 container init a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:06:49 np0005603663 podman[78059]: 2026-01-31 08:06:49.951314404 +0000 UTC m=+0.149349199 container start a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:06:49 np0005603663 podman[78059]: 2026-01-31 08:06:49.95512113 +0000 UTC m=+0.153155885 container attach a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]: [
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:    {
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "available": false,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "being_replaced": false,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "ceph_device_lvm": false,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "lsm_data": {},
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "lvs": [],
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "path": "/dev/sr0",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "rejected_reasons": [
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "Insufficient space (<5GB)",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "Has a FileSystem"
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        ],
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        "sys_api": {
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "actuators": null,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "device_nodes": [
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:                "sr0"
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            ],
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "devname": "sr0",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "human_readable_size": "482.00 KB",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "id_bus": "ata",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "model": "QEMU DVD-ROM",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "nr_requests": "2",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "parent": "/dev/sr0",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "partitions": {},
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "path": "/dev/sr0",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "removable": "1",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "rev": "2.5+",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "ro": "0",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "rotational": "1",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "sas_address": "",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "sas_device_handle": "",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "scheduler_mode": "mq-deadline",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "sectors": 0,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "sectorsize": "2048",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "size": 493568.0,
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "support_discard": "2048",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "type": "disk",
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:            "vendor": "QEMU"
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:        }
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]:    }
Jan 31 03:06:49 np0005603663 affectionate_lovelace[78023]: ]
Jan 31 03:06:50 np0005603663 systemd[1]: libpod-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope: Deactivated successfully.
Jan 31 03:06:50 np0005603663 podman[78007]: 2026-01-31 08:06:50.013231587 +0000 UTC m=+0.650765359 container died 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:06:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay-cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128-merged.mount: Deactivated successfully.
Jan 31 03:06:50 np0005603663 podman[78007]: 2026-01-31 08:06:50.125535905 +0000 UTC m=+0.763069667 container remove 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:06:50 np0005603663 systemd[1]: libpod-conmon-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope: Deactivated successfully.
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:06:50 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 03:06:50 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 31 03:06:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3928096470' entity='client.admin' 
Jan 31 03:06:50 np0005603663 systemd[1]: libpod-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope: Deactivated successfully.
Jan 31 03:06:50 np0005603663 podman[78059]: 2026-01-31 08:06:50.389738493 +0000 UTC m=+0.587773278 container died a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:06:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f-merged.mount: Deactivated successfully.
Jan 31 03:06:50 np0005603663 podman[78059]: 2026-01-31 08:06:50.432121259 +0000 UTC m=+0.630156014 container remove a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:50 np0005603663 systemd[1]: libpod-conmon-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope: Deactivated successfully.
Jan 31 03:06:50 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 03:06:50 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 03:06:50 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3928096470' entity='client.admin' 
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 03:06:51 np0005603663 ansible-async_wrapper.py[79353]: Invoked with j295523863277 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846810.7880495-36661-99770731538985/AnsiballZ_command.py _
Jan 31 03:06:51 np0005603663 ansible-async_wrapper.py[79444]: Starting module and watcher
Jan 31 03:06:51 np0005603663 ansible-async_wrapper.py[79444]: Start watching 79449 (30)
Jan 31 03:06:51 np0005603663 ansible-async_wrapper.py[79449]: Start module (79449)
Jan 31 03:06:51 np0005603663 ansible-async_wrapper.py[79353]: Return async_wrapper task started.
Jan 31 03:06:51 np0005603663 python3[79452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:51 np0005603663 podman[79508]: 2026-01-31 08:06:51.542399985 +0000 UTC m=+0.064906586 container create 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:06:51 np0005603663 systemd[1]: Started libpod-conmon-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope.
Jan 31 03:06:51 np0005603663 podman[79508]: 2026-01-31 08:06:51.513515147 +0000 UTC m=+0.036021808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 03:06:51 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:51 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:51 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 03:06:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:06:51 np0005603663 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 03:06:51 np0005603663 podman[79508]: 2026-01-31 08:06:51.655831006 +0000 UTC m=+0.178337687 container init 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:51 np0005603663 podman[79508]: 2026-01-31 08:06:51.662378771 +0000 UTC m=+0.184885342 container start 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:06:51 np0005603663 podman[79508]: 2026-01-31 08:06:51.667108082 +0000 UTC m=+0.189614693 container attach 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:52 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:06:52 np0005603663 interesting_dewdney[79572]: 
Jan 31 03:06:52 np0005603663 interesting_dewdney[79572]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 03:06:52 np0005603663 systemd[1]: libpod-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope: Deactivated successfully.
Jan 31 03:06:52 np0005603663 podman[79508]: 2026-01-31 08:06:52.121089436 +0000 UTC m=+0.643596037 container died 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:06:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace-merged.mount: Deactivated successfully.
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1))
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:52 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 03:06:52 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 03:06:52 np0005603663 podman[79508]: 2026-01-31 08:06:52.176944788 +0000 UTC m=+0.699451349 container remove 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:52 np0005603663 systemd[1]: libpod-conmon-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope: Deactivated successfully.
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 03:06:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 03:06:52 np0005603663 ansible-async_wrapper.py[79449]: Module complete (79449)
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.703913172 +0000 UTC m=+0.062982801 container create 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:06:52 np0005603663 python3[79958]: ansible-ansible.legacy.async_status Invoked with jid=j295523863277.79353 mode=status _async_dir=/root/.ansible_async
Jan 31 03:06:52 np0005603663 systemd[1]: Started libpod-conmon-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope.
Jan 31 03:06:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:52 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.769872803 +0000 UTC m=+0.128942432 container init 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.676937658 +0000 UTC m=+0.036007357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.774285025 +0000 UTC m=+0.133354634 container start 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:52 np0005603663 focused_beaver[79990]: 167 167
Jan 31 03:06:52 np0005603663 systemd[1]: libpod-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope: Deactivated successfully.
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.778243995 +0000 UTC m=+0.137313654 container attach 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.778936448 +0000 UTC m=+0.138006067 container died 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:06:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6a7572c4b5d9ea0807cb21646d38c27bc5a59e8d5e32031f8f7a6b65de39a1e0-merged.mount: Deactivated successfully.
Jan 31 03:06:52 np0005603663 podman[79974]: 2026-01-31 08:06:52.812549066 +0000 UTC m=+0.171618715 container remove 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:52 np0005603663 systemd[1]: libpod-conmon-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope: Deactivated successfully.
Jan 31 03:06:52 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:52 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:52 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:53 np0005603663 python3[80057]: ansible-ansible.legacy.async_status Invoked with jid=j295523863277.79353 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 03:06:53 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:53 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:53 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: Deploying daemon crash.compute-0 on compute-0
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:53 np0005603663 systemd[1]: Starting Ceph crash.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:53 np0005603663 python3[80169]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 03:06:53 np0005603663 podman[80210]: 2026-01-31 08:06:53.554680237 +0000 UTC m=+0.051074427 container create a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:06:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:53 np0005603663 podman[80210]: 2026-01-31 08:06:53.527505185 +0000 UTC m=+0.023899375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:53 np0005603663 podman[80210]: 2026-01-31 08:06:53.623668684 +0000 UTC m=+0.120062934 container init a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:53 np0005603663 podman[80210]: 2026-01-31 08:06:53.633143316 +0000 UTC m=+0.129537506 container start a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:53 np0005603663 bash[80210]: a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c
Jan 31 03:06:53 np0005603663 systemd[1]: Started Ceph crash.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1))
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2))
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 03:06:53 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.806+0000 7f3d3cc71640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.806+0000 7f3d3cc71640 -1 AuthRegistry(0x7f3d38052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.808+0000 7f3d3cc71640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.808+0000 7f3d3cc71640 -1 AuthRegistry(0x7f3d3cc6ffe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.809+0000 7f3d36575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.809+0000 7f3d3cc71640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 03:06:53 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 03:06:53 np0005603663 python3[80283]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.025683861 +0000 UTC m=+0.052079640 container create 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:54 np0005603663 systemd[1]: Started libpod-conmon-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope.
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:53.999388428 +0000 UTC m=+0.025784267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:54 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.120719017 +0000 UTC m=+0.147114846 container init 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.127210938 +0000 UTC m=+0.153606687 container start 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.138548519 +0000 UTC m=+0.164944288 container attach 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.305628951 +0000 UTC m=+0.044884635 container create b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:06:54 np0005603663 systemd[1]: Started libpod-conmon-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope.
Jan 31 03:06:54 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.285111094 +0000 UTC m=+0.024366808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.42442926 +0000 UTC m=+0.163685024 container init b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.430670677 +0000 UTC m=+0.169926361 container start b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:54 np0005603663 suspicious_gates[80412]: 167 167
Jan 31 03:06:54 np0005603663 systemd[1]: libpod-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope: Deactivated successfully.
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.437174619 +0000 UTC m=+0.176430423 container attach b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.437645642 +0000 UTC m=+0.176901366 container died b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-60595ffe9fcdfd1270c2f8622386927961d4908d0e8956c9f19317c32ffa617c-merged.mount: Deactivated successfully.
Jan 31 03:06:54 np0005603663 podman[80387]: 2026-01-31 08:06:54.512647826 +0000 UTC m=+0.251903540 container remove b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:06:54 np0005603663 systemd[1]: libpod-conmon-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope: Deactivated successfully.
Jan 31 03:06:54 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:06:54 np0005603663 stupefied_montalcini[80335]: 
Jan 31 03:06:54 np0005603663 stupefied_montalcini[80335]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 03:06:54 np0005603663 systemd[1]: libpod-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope: Deactivated successfully.
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.587317009 +0000 UTC m=+0.613712758 container died 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:06:54 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:54 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:54 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 03:06:54 np0005603663 ceph-mon[75227]: Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 03:06:54 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a-merged.mount: Deactivated successfully.
Jan 31 03:06:54 np0005603663 podman[80320]: 2026-01-31 08:06:54.896534923 +0000 UTC m=+0.922930672 container remove 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:06:54 np0005603663 systemd[1]: libpod-conmon-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope: Deactivated successfully.
Jan 31 03:06:54 np0005603663 systemd[1]: Reloading.
Jan 31 03:06:55 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:06:55 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:06:55 np0005603663 systemd[1]: Starting Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:06:55 np0005603663 python3[80549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:55 np0005603663 podman[80596]: 2026-01-31 08:06:55.431025723 +0000 UTC m=+0.045306113 container create 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:55 np0005603663 podman[80595]: 2026-01-31 08:06:55.46143777 +0000 UTC m=+0.072159656 container create 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:06:55 np0005603663 podman[80596]: 2026-01-31 08:06:55.40526943 +0000 UTC m=+0.019549850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:06:55 np0005603663 podman[80595]: 2026-01-31 08:06:55.409178685 +0000 UTC m=+0.019900591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:55 np0005603663 systemd[1]: Started libpod-conmon-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope.
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/lib/ceph/mgr/ceph-compute-0.mdykbc supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:55 np0005603663 podman[80596]: 2026-01-31 08:06:55.530140271 +0000 UTC m=+0.144420671 container init 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:06:55 np0005603663 podman[80596]: 2026-01-31 08:06:55.538867945 +0000 UTC m=+0.153148335 container start 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:55 np0005603663 podman[80595]: 2026-01-31 08:06:55.542645499 +0000 UTC m=+0.153367365 container init 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:06:55 np0005603663 podman[80595]: 2026-01-31 08:06:55.548057441 +0000 UTC m=+0.158779307 container start 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:06:55 np0005603663 bash[80596]: 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e
Jan 31 03:06:55 np0005603663 systemd[1]: Started Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:06:55 np0005603663 podman[80595]: 2026-01-31 08:06:55.563636718 +0000 UTC m=+0.174358604 container attach 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: pidfile_write: ignore empty --pid-file
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'alerts'
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:06:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2))
Jan 31 03:06:55 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'balancer'
Jan 31 03:06:55 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'cephadm'
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220492711' entity='client.admin' 
Jan 31 03:06:56 np0005603663 systemd[1]: libpod-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope: Deactivated successfully.
Jan 31 03:06:56 np0005603663 conmon[80628]: conmon 8e5be85540dd80bd7d6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope/container/memory.events
Jan 31 03:06:56 np0005603663 podman[80595]: 2026-01-31 08:06:56.06551523 +0000 UTC m=+0.676237086 container died 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:06:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77-merged.mount: Deactivated successfully.
Jan 31 03:06:56 np0005603663 podman[80595]: 2026-01-31 08:06:56.106225304 +0000 UTC m=+0.716947160 container remove 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:56 np0005603663 systemd[1]: libpod-conmon-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope: Deactivated successfully.
Jan 31 03:06:56 np0005603663 podman[80811]: 2026-01-31 08:06:56.267116862 +0000 UTC m=+0.063140275 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:56 np0005603663 ansible-async_wrapper.py[79444]: Done in kid B.
Jan 31 03:06:56 np0005603663 podman[80811]: 2026-01-31 08:06:56.360994773 +0000 UTC m=+0.157018156 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:56 np0005603663 python3[80863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:56 np0005603663 podman[80887]: 2026-01-31 08:06:56.457388064 +0000 UTC m=+0.031690525 container create 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:56 np0005603663 systemd[1]: Started libpod-conmon-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope.
Jan 31 03:06:56 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'crash'
Jan 31 03:06:56 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:56 np0005603663 podman[80887]: 2026-01-31 08:06:56.523924007 +0000 UTC m=+0.098226498 container init 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:06:56 np0005603663 podman[80887]: 2026-01-31 08:06:56.539410796 +0000 UTC m=+0.113713277 container start 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:06:56 np0005603663 podman[80887]: 2026-01-31 08:06:56.443027057 +0000 UTC m=+0.017329528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:56 np0005603663 podman[80887]: 2026-01-31 08:06:56.548580821 +0000 UTC m=+0.122883292 container attach 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:06:56 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'dashboard'
Jan 31 03:06:56 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:56 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 03:06:56 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:56 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 03:06:56 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 31 03:06:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1768084644' entity='client.admin' 
Jan 31 03:06:56 np0005603663 systemd[1]: libpod-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope: Deactivated successfully.
Jan 31 03:06:56 np0005603663 podman[81042]: 2026-01-31 08:06:56.985335898 +0000 UTC m=+0.018536458 container died 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:06:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96-merged.mount: Deactivated successfully.
Jan 31 03:06:57 np0005603663 podman[81042]: 2026-01-31 08:06:57.012864822 +0000 UTC m=+0.046065352 container remove 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:57 np0005603663 systemd[1]: libpod-conmon-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope: Deactivated successfully.
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3220492711' entity='client.admin' 
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1768084644' entity='client.admin' 
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'devicehealth'
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.330417804 +0000 UTC m=+0.045946421 container create 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:57 np0005603663 systemd[1]: Started libpod-conmon-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope.
Jan 31 03:06:57 np0005603663 python3[81113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:57 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.313183386 +0000 UTC m=+0.028712033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.418512569 +0000 UTC m=+0.134041196 container init 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.423038241 +0000 UTC m=+0.138566858 container start 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:57 np0005603663 fervent_dewdney[81139]: 167 167
Jan 31 03:06:57 np0005603663 systemd[1]: libpod-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope: Deactivated successfully.
Jan 31 03:06:57 np0005603663 conmon[81139]: conmon 34f39514043738930a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope/container/memory.events
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.429498949 +0000 UTC m=+0.145027586 container attach 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.430699018 +0000 UTC m=+0.146227635 container died 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:06:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay-53632e5d667fe8d1cf05b20e665c9aed781506251b4b3a5ec4fd625640e23d31-merged.mount: Deactivated successfully.
Jan 31 03:06:57 np0005603663 podman[81123]: 2026-01-31 08:06:57.473673928 +0000 UTC m=+0.189202555 container remove 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:06:57 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 03:06:57 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 03:06:57 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]:  from numpy import show_config as show_numpy_config
Jan 31 03:06:57 np0005603663 systemd[1]: libpod-conmon-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope: Deactivated successfully.
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'influx'
Jan 31 03:06:57 np0005603663 podman[81142]: 2026-01-31 08:06:57.495727935 +0000 UTC m=+0.083493798 container create 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:06:57 np0005603663 systemd[1]: Started libpod-conmon-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope.
Jan 31 03:06:57 np0005603663 podman[81142]: 2026-01-31 08:06:57.449559344 +0000 UTC m=+0.037325277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:57 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'insights'
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:57 np0005603663 podman[81142]: 2026-01-31 08:06:57.57612964 +0000 UTC m=+0.163895573 container init 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 podman[81142]: 2026-01-31 08:06:57.580544122 +0000 UTC m=+0.168310005 container start 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 03:06:57 np0005603663 podman[81142]: 2026-01-31 08:06:57.587880209 +0000 UTC m=+0.175646062 container attach 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'iostat'
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:06:57 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'k8sevents'
Jan 31 03:06:57 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 2 completed events
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 31 03:06:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.014739486 +0000 UTC m=+0.059313928 container create 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 03:06:58 np0005603663 systemd[1]: Started libpod-conmon-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope.
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'localpool'
Jan 31 03:06:58 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:57.979851932 +0000 UTC m=+0.024426464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.085327958 +0000 UTC m=+0.129902450 container init 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.09468552 +0000 UTC m=+0.139259962 container start 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:06:58 np0005603663 elastic_shannon[81275]: 167 167
Jan 31 03:06:58 np0005603663 systemd[1]: libpod-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope: Deactivated successfully.
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.09919843 +0000 UTC m=+0.143772932 container attach 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.099973941 +0000 UTC m=+0.144548393 container died 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:06:58 np0005603663 systemd[1]: var-lib-containers-storage-overlay-be31eb65e9b9bf899f577e2733f5ccba3b3f26bbc8c1563de5837919a22489d7-merged.mount: Deactivated successfully.
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 03:06:58 np0005603663 podman[81258]: 2026-01-31 08:06:58.140667113 +0000 UTC m=+0.185241555 container remove 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:58 np0005603663 systemd[1]: libpod-conmon-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope: Deactivated successfully.
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'mirroring'
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'nfs'
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'orchestrator'
Jan 31 03:06:58 np0005603663 podman[81388]: 2026-01-31 08:06:58.748859738 +0000 UTC m=+0.048073915 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:06:58 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 03:06:58 np0005603663 magical_taussig[81170]: set require_min_compat_client to mimic
Jan 31 03:06:58 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 03:06:58 np0005603663 systemd[1]: libpod-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope: Deactivated successfully.
Jan 31 03:06:58 np0005603663 podman[81142]: 2026-01-31 08:06:58.794639423 +0000 UTC m=+1.382405286 container died 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:06:58 np0005603663 systemd[1]: var-lib-containers-storage-overlay-454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110-merged.mount: Deactivated successfully.
Jan 31 03:06:58 np0005603663 podman[81142]: 2026-01-31 08:06:58.838650688 +0000 UTC m=+1.426416541 container remove 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:58 np0005603663 systemd[1]: libpod-conmon-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope: Deactivated successfully.
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 03:06:58 np0005603663 podman[81388]: 2026-01-31 08:06:58.855826 +0000 UTC m=+0.155040177 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'osd_support'
Jan 31 03:06:58 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 03:06:59 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'progress'
Jan 31 03:06:59 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'prometheus'
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:06:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:06:59 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'rbd_support'
Jan 31 03:06:59 np0005603663 python3[81540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:06:59 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'rgw'
Jan 31 03:06:59 np0005603663 podman[81566]: 2026-01-31 08:06:59.529202365 +0000 UTC m=+0.058060523 container create 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:59 np0005603663 systemd[1]: Started libpod-conmon-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope.
Jan 31 03:06:59 np0005603663 podman[81566]: 2026-01-31 08:06:59.504547252 +0000 UTC m=+0.033405470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:06:59 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:06:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:06:59 np0005603663 podman[81566]: 2026-01-31 08:06:59.670340906 +0000 UTC m=+0.199199134 container init 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:06:59 np0005603663 podman[81566]: 2026-01-31 08:06:59.677770772 +0000 UTC m=+0.206628940 container start 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:06:59 np0005603663 podman[81566]: 2026-01-31 08:06:59.708464815 +0000 UTC m=+0.237323043 container attach 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:06:59 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'rook'
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'selftest'
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'smb'
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Added host compute-0
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'snap_schedule'
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1))
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 03:07:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:00 np0005603663 stupefied_carver[81581]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 03:07:00 np0005603663 stupefied_carver[81581]: Scheduled mon update...
Jan 31 03:07:00 np0005603663 stupefied_carver[81581]: Scheduled mgr update...
Jan 31 03:07:00 np0005603663 stupefied_carver[81581]: Scheduled osd.default_drive_group update...
Jan 31 03:07:00 np0005603663 systemd[1]: libpod-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope: Deactivated successfully.
Jan 31 03:07:00 np0005603663 podman[81566]: 2026-01-31 08:07:00.601240652 +0000 UTC m=+1.130098810 container died 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:00 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764-merged.mount: Deactivated successfully.
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'stats'
Jan 31 03:07:00 np0005603663 podman[81566]: 2026-01-31 08:07:00.64474227 +0000 UTC m=+1.173600428 container remove 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:07:00 np0005603663 systemd[1]: libpod-conmon-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope: Deactivated successfully.
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'status'
Jan 31 03:07:00 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'telegraf'
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'telemetry'
Jan 31 03:07:00 np0005603663 systemd[1]: Stopping Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:07:00 np0005603663 ceph-mgr[80633]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 03:07:01 np0005603663 python3[81779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:01 np0005603663 podman[81804]: 2026-01-31 08:07:01.097548287 +0000 UTC m=+0.062948408 container died 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.11340088 +0000 UTC m=+0.039905582 container create cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:07:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3-merged.mount: Deactivated successfully.
Jan 31 03:07:01 np0005603663 podman[81804]: 2026-01-31 08:07:01.138738095 +0000 UTC m=+0.104138226 container remove 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:01 np0005603663 bash[81804]: ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc
Jan 31 03:07:01 np0005603663 systemd[1]: Started libpod-conmon-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope.
Jan 31 03:07:01 np0005603663 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Main process exited, code=exited, status=143/n/a
Jan 31 03:07:01 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.186672766 +0000 UTC m=+0.113177488 container init cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.093572086 +0000 UTC m=+0.020076878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.191226391 +0000 UTC m=+0.117731103 container start cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.194269697 +0000 UTC m=+0.120774429 container attach cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:01 np0005603663 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Failed with result 'exit-code'.
Jan 31 03:07:01 np0005603663 systemd[1]: Stopped Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:07:01 np0005603663 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Consumed 6.173s CPU time, 426.0M memory peak, read 0B from disk, written 188.5K to disk.
Jan 31 03:07:01 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:01 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:01 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Added host compute-0
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Saving service mon spec with placement compute-0
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Saving service mgr spec with placement compute-0
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Saving service osd.default_drive_group spec with placement compute-0
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.mdykbc
Jan 31 03:07:01 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.mdykbc
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} v 0)
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} : dispatch
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"}]': finished
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1))
Jan 31 03:07:01 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 03:07:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292455884' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 03:07:01 np0005603663 adoring_kapitsa[81849]: 
Jan 31 03:07:01 np0005603663 adoring_kapitsa[81849]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":48,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T08:06:11.330734+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T08:06:11.333031+0000","services":{}},"progress_events":{}}
Jan 31 03:07:01 np0005603663 systemd[1]: libpod-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope: Deactivated successfully.
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.735849882 +0000 UTC m=+0.662354624 container died cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63-merged.mount: Deactivated successfully.
Jan 31 03:07:01 np0005603663 podman[81820]: 2026-01-31 08:07:01.777160571 +0000 UTC m=+0.703665273 container remove cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:01 np0005603663 systemd[1]: libpod-conmon-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope: Deactivated successfully.
Jan 31 03:07:02 np0005603663 podman[82077]: 2026-01-31 08:07:02.139459894 +0000 UTC m=+0.055629883 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:07:02 np0005603663 podman[82077]: 2026-01-31 08:07:02.238717884 +0000 UTC m=+0.154887873 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: Removing key for mgr.compute-0.mdykbc
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"}]': finished
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
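The audit-channel lines above follow a fixed shape: `from='<addr>' entity='<name>' cmd=<json> : <result>`. A hedged regex sketch for extracting the entity and command prefix from that shape — note it only handles the JSON `cmd={...}` form, not the bracketed `[{prefix=config-key set, ...}]` variant also seen in this log:

```python
import json
import re

# Two audit lines from the ceph-mon log above, timestamp/host prefix stripped.
audit = [
    "from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' "
    "cmd={\"prefix\": \"auth get\", \"entity\": \"client.admin\"} : dispatch",
    "from='client.? 192.168.122.100:0/292455884' entity='client.admin' "
    "cmd={\"prefix\": \"status\", \"format\": \"json\"} : dispatch",
]

# entity is single-quoted; cmd is a JSON object; result is the trailing word.
LINE = re.compile(r"entity='(?P<entity>[^']+)' cmd=(?P<cmd>\{.*\}) : (?P<result>\w+)")

for line in audit:
    m = LINE.search(line)
    cmd = json.loads(m.group("cmd"))  # the cmd field is itself valid JSON
    print(m.group("entity"), cmd["prefix"], m.group("result"))
```

This is a sketch against the lines shown here, not a parser for every audit-log variant Ceph can produce.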
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 3 completed events
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.888807701 +0000 UTC m=+0.030151494 container create 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:07:02 np0005603663 systemd[1]: Started libpod-conmon-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope.
Jan 31 03:07:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.946067621 +0000 UTC m=+0.087411414 container init 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.950487103 +0000 UTC m=+0.091830886 container start 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:07:02 np0005603663 strange_sutherland[82254]: 167 167
Jan 31 03:07:02 np0005603663 systemd[1]: libpod-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope: Deactivated successfully.
Jan 31 03:07:02 np0005603663 conmon[82254]: conmon 947a4fec9e17e6f25f03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope/container/memory.events
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.958675698 +0000 UTC m=+0.100019531 container attach 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.959003948 +0000 UTC m=+0.100347771 container died 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:07:02 np0005603663 podman[82238]: 2026-01-31 08:07:02.875462987 +0000 UTC m=+0.016806800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b6a74aeacc6144366873e0827827a2b51170899bee32ab2ad7f35a12215c64f8-merged.mount: Deactivated successfully.
Jan 31 03:07:03 np0005603663 podman[82238]: 2026-01-31 08:07:03.009435666 +0000 UTC m=+0.150779459 container remove 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:07:03 np0005603663 systemd[1]: libpod-conmon-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope: Deactivated successfully.
Jan 31 03:07:03 np0005603663 podman[82276]: 2026-01-31 08:07:03.12730711 +0000 UTC m=+0.036932322 container create 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:07:03 np0005603663 systemd[1]: Started libpod-conmon-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope.
Jan 31 03:07:03 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:03 np0005603663 podman[82276]: 2026-01-31 08:07:03.108572795 +0000 UTC m=+0.018198057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:03 np0005603663 podman[82276]: 2026-01-31 08:07:03.22226922 +0000 UTC m=+0.131894462 container init 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:03 np0005603663 podman[82276]: 2026-01-31 08:07:03.229165597 +0000 UTC m=+0.138790839 container start 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:07:03 np0005603663 podman[82276]: 2026-01-31 08:07:03.232152379 +0000 UTC m=+0.141777601 container attach 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:07:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:03 np0005603663 epic_aryabhata[82292]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:07:03 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:03 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:03 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 39c36249-2898-4a76-b317-8e4ca379866f
Jan 31 03:07:03 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} v 0)
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} : dispatch
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"}]': finished
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:04 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 03:07:04 np0005603663 lvm[82386]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:04 np0005603663 lvm[82386]: VG ceph_vg0 finished
Jan 31 03:07:04 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} : dispatch
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"}]': finished
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 03:07:04 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434991703' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: stderr: got monmap epoch 1
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: --> Creating keyring file for osd.0
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 03:07:04 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 39c36249-2898-4a76-b317-8e4ca379866f --setuser ceph --setgroup ceph
Jan 31 03:07:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:04.982+0000 7fbc6ac288c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:05.002+0000 7fbc6ac288c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 03:07:05 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 03:07:05 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
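The `epic_aryabhata` lines above are ceph-volume's own progress output: each subprocess it shells out to is logged as `Running command: ...`, and each phase ends with a `--> ... successful` marker (prepare, then activate, then the overall create). A small sketch that recovers that structure from a handful of the lines above, verbatim minus the timestamp/host prefix:

```python
import re

# Excerpted epic_aryabhata lines from the log above.
log_lines = [
    "epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key",
    "epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0",
    "epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0",
    "epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph "
    "prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config",
    "epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 0",
]

CMD = re.compile(r"Running command: (\S+)")

# The binaries ceph-volume invoked, in order.
commands = [m.group(1) for line in log_lines if (m := CMD.search(line))]
print(commands)

# The "--> ... successful" markers delimit the prepare and activate phases.
phases = [line.split("--> ", 1)[1] for line in log_lines if "--> " in line]
print(phases)
```

Against the excerpt this yields three commands and two phase markers, matching the prepare/activate split visible in the full sequence for osd.0.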
Jan 31 03:07:05 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dacad4fa-56d8-4937-b2d8-306fb75187f3
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} v 0)
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} : dispatch
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"}]': finished
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:06 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:06 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:06 np0005603663 lvm[83338]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:06 np0005603663 lvm[83338]: VG ceph_vg1 finished
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:06 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 03:07:06 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: Cluster is now healthy
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} : dispatch
Jan 31 03:07:06 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"}]': finished
Jan 31 03:07:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 03:07:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829006065' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: stderr: got monmap epoch 1
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: --> Creating keyring file for osd.1
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid dacad4fa-56d8-4937-b2d8-306fb75187f3 --setuser ceph --setgroup ceph
Jan 31 03:07:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:07.185+0000 7f1834a1f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:07.207+0000 7f1834a1f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 03:07:07 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new faa25865-e7b6-44f9-8188-08bf287b941b
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} v 0)
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} : dispatch
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"}]': finished
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:08 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:08 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:08 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:08 np0005603663 lvm[84292]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:08 np0005603663 lvm[84292]: VG ceph_vg2 finished
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:08 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 03:07:08 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} : dispatch
Jan 31 03:07:08 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"}]': finished
Jan 31 03:07:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 03:07:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071841395' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 03:07:09 np0005603663 epic_aryabhata[82292]: stderr: got monmap epoch 1
Jan 31 03:07:09 np0005603663 epic_aryabhata[82292]: --> Creating keyring file for osd.2
Jan 31 03:07:09 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 03:07:09 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 03:07:09 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid faa25865-e7b6-44f9-8188-08bf287b941b --setuser ceph --setgroup ceph
Jan 31 03:07:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:09.362+0000 7fefb56948c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: stderr: 2026-01-31T08:07:09.382+0000 7fefb56948c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 03:07:10 np0005603663 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 31 03:07:10 np0005603663 systemd[1]: libpod-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Deactivated successfully.
Jan 31 03:07:10 np0005603663 systemd[1]: libpod-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Consumed 5.399s CPU time.
Jan 31 03:07:10 np0005603663 podman[85216]: 2026-01-31 08:07:10.699271794 +0000 UTC m=+0.022217715 container died 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:10 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:10 np0005603663 systemd[1]: var-lib-containers-storage-overlay-234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf-merged.mount: Deactivated successfully.
Jan 31 03:07:10 np0005603663 podman[85216]: 2026-01-31 08:07:10.852160666 +0000 UTC m=+0.175106577 container remove 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:10 np0005603663 systemd[1]: libpod-conmon-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.235759831 +0000 UTC m=+0.030763919 container create f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:11 np0005603663 systemd[1]: Started libpod-conmon-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope.
Jan 31 03:07:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.313320354 +0000 UTC m=+0.108324462 container init f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.318978545 +0000 UTC m=+0.113982633 container start f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.222785751 +0000 UTC m=+0.017789859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.321930869 +0000 UTC m=+0.116934957 container attach f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:07:11 np0005603663 hungry_zhukovsky[85310]: 167 167
Jan 31 03:07:11 np0005603663 systemd[1]: libpod-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.323260447 +0000 UTC m=+0.118264535 container died f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3c0bb43aeca8c4ce984e7a5b0ce83cc184130dab29992b8e3dcb7d4c32375fc8-merged.mount: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85293]: 2026-01-31 08:07:11.35910489 +0000 UTC m=+0.154109008 container remove f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:11 np0005603663 systemd[1]: libpod-conmon-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.493157435 +0000 UTC m=+0.039078046 container create b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:07:11 np0005603663 systemd[1]: Started libpod-conmon-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope.
Jan 31 03:07:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.472886546 +0000 UTC m=+0.018807137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.645607594 +0000 UTC m=+0.191528175 container init b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.650167884 +0000 UTC m=+0.196088455 container start b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.652962074 +0000 UTC m=+0.198882645 container attach b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]: {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    "0": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "devices": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "/dev/loop3"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            ],
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_name": "ceph_lv0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_size": "21470642176",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "name": "ceph_lv0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "tags": {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.crush_device_class": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.encrypted": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_id": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.vdo": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.with_tpm": "0"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            },
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "vg_name": "ceph_vg0"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        }
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    ],
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    "1": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "devices": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "/dev/loop4"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            ],
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_name": "ceph_lv1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_size": "21470642176",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "name": "ceph_lv1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "tags": {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.crush_device_class": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.encrypted": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_id": "1",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.vdo": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.with_tpm": "0"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            },
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "vg_name": "ceph_vg1"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        }
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    ],
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    "2": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "devices": [
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "/dev/loop5"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            ],
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_name": "ceph_lv2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_size": "21470642176",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "name": "ceph_lv2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "tags": {
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.crush_device_class": "",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.encrypted": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osd_id": "2",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.vdo": "0",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:                "ceph.with_tpm": "0"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            },
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "type": "block",
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:            "vg_name": "ceph_vg2"
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:        }
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]:    ]
Jan 31 03:07:11 np0005603663 thirsty_lamport[85352]: }
Jan 31 03:07:11 np0005603663 systemd[1]: libpod-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.915437573 +0000 UTC m=+0.461358144 container died b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:07:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da-merged.mount: Deactivated successfully.
Jan 31 03:07:11 np0005603663 podman[85335]: 2026-01-31 08:07:11.961220159 +0000 UTC m=+0.507140740 container remove b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:07:11 np0005603663 systemd[1]: libpod-conmon-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope: Deactivated successfully.
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:12 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 03:07:12 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.507394042 +0000 UTC m=+0.043550504 container create 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:12 np0005603663 systemd[1]: Started libpod-conmon-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope.
Jan 31 03:07:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.582202256 +0000 UTC m=+0.118358728 container init 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.488689488 +0000 UTC m=+0.024845970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.587195629 +0000 UTC m=+0.123352071 container start 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:12 np0005603663 admiring_ride[85480]: 167 167
Jan 31 03:07:12 np0005603663 systemd[1]: libpod-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope: Deactivated successfully.
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.592404908 +0000 UTC m=+0.128561350 container attach 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.592746357 +0000 UTC m=+0.128902799 container died 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1b2ef0344c615f3a1b61d93405ab647291ec13b22fc6c3cba569c0368bec3113-merged.mount: Deactivated successfully.
Jan 31 03:07:12 np0005603663 podman[85463]: 2026-01-31 08:07:12.628519008 +0000 UTC m=+0.164675450 container remove 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:12 np0005603663 systemd[1]: libpod-conmon-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope: Deactivated successfully.
Jan 31 03:07:12 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:12 np0005603663 podman[85510]: 2026-01-31 08:07:12.845589671 +0000 UTC m=+0.044563592 container create 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:07:12 np0005603663 systemd[1]: Started libpod-conmon-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope.
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 03:07:12 np0005603663 ceph-mon[75227]: Deploying daemon osd.0 on compute-0
Jan 31 03:07:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:12 np0005603663 podman[85510]: 2026-01-31 08:07:12.917559984 +0000 UTC m=+0.116533905 container init 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:12 np0005603663 podman[85510]: 2026-01-31 08:07:12.82344859 +0000 UTC m=+0.022422571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:12 np0005603663 podman[85510]: 2026-01-31 08:07:12.924229145 +0000 UTC m=+0.123203076 container start 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:12 np0005603663 podman[85510]: 2026-01-31 08:07:12.929573488 +0000 UTC m=+0.128547379 container attach 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:07:13 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 03:07:13 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]:                            [--no-systemd] [--no-tmpfs]
Jan 31 03:07:13 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 03:07:13 np0005603663 systemd[1]: libpod-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope: Deactivated successfully.
Jan 31 03:07:13 np0005603663 podman[85510]: 2026-01-31 08:07:13.145771196 +0000 UTC m=+0.344745097 container died 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd-merged.mount: Deactivated successfully.
Jan 31 03:07:13 np0005603663 podman[85510]: 2026-01-31 08:07:13.197005767 +0000 UTC m=+0.395979658 container remove 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:07:13 np0005603663 systemd[1]: libpod-conmon-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope: Deactivated successfully.
Jan 31 03:07:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:13 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:13 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:13 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:13 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:13 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:13 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:13 np0005603663 systemd[1]: Starting Ceph osd.0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:07:14 np0005603663 podman[85689]: 2026-01-31 08:07:14.083037636 +0000 UTC m=+0.040329872 container create fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:07:14 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:14 np0005603663 podman[85689]: 2026-01-31 08:07:14.062936203 +0000 UTC m=+0.020228519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:14 np0005603663 podman[85689]: 2026-01-31 08:07:14.160593009 +0000 UTC m=+0.117885325 container init fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:07:14 np0005603663 podman[85689]: 2026-01-31 08:07:14.168277148 +0000 UTC m=+0.125569394 container start fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:07:14 np0005603663 podman[85689]: 2026-01-31 08:07:14.171386007 +0000 UTC m=+0.128678253 container attach fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:14 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:14 np0005603663 lvm[85787]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:14 np0005603663 lvm[85787]: VG ceph_vg0 finished
Jan 31 03:07:14 np0005603663 lvm[85790]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:14 np0005603663 lvm[85790]: VG ceph_vg1 finished
Jan 31 03:07:14 np0005603663 lvm[85792]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:14 np0005603663 lvm[85792]: VG ceph_vg2 finished
Jan 31 03:07:14 np0005603663 lvm[85793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:14 np0005603663 lvm[85793]: VG ceph_vg0 finished
Jan 31 03:07:14 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:14 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 bash[85689]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:14 np0005603663 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:14 np0005603663 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:15 np0005603663 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 03:07:15 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 03:07:15 np0005603663 bash[85689]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 03:07:15 np0005603663 systemd[1]: libpod-fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f.scope: Deactivated successfully.
Jan 31 03:07:15 np0005603663 podman[85689]: 2026-01-31 08:07:15.122508233 +0000 UTC m=+1.079800489 container died fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:07:15 np0005603663 systemd[1]: libpod-fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f.scope: Consumed 1.220s CPU time.
Jan 31 03:07:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4-merged.mount: Deactivated successfully.
Jan 31 03:07:15 np0005603663 podman[85689]: 2026-01-31 08:07:15.176793782 +0000 UTC m=+1.134086028 container remove fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:15 np0005603663 podman[85951]: 2026-01-31 08:07:15.412926219 +0000 UTC m=+0.063205304 container create a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:07:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:15 np0005603663 podman[85951]: 2026-01-31 08:07:15.385315011 +0000 UTC m=+0.035594156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:15 np0005603663 podman[85951]: 2026-01-31 08:07:15.496599737 +0000 UTC m=+0.146878842 container init a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 03:07:15 np0005603663 podman[85951]: 2026-01-31 08:07:15.501784624 +0000 UTC m=+0.152063699 container start a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:15 np0005603663 bash[85951]: a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e
Jan 31 03:07:15 np0005603663 systemd[1]: Started Ceph osd.0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: pidfile_write: ignore empty --pid-file
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:15 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 31 03:07:15 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: load: jerasure load: lrc 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount shared_bdev_used = 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Git sha 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DB SUMMARY
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DB Session ID:  Z9SKTA50MPZ0LLKR730F
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                     Options.env: 0x561e0145dea0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                Options.info_log: 0x561e024b88a0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.write_buffer_manager: 0x561e014c2b40
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Compression algorithms supported:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kZSTD supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kXpressCompression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kZlibCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e014618d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e014618d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e014618d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e014618d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e014618d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e01461a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e01461a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e01461a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835925371, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835926236, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: freelist init
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: freelist _read_cfg
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs umount
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluefs mount shared_bdev_used = 27262976
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Git sha 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DB SUMMARY
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DB Session ID:  Z9SKTA50MPZ0LLKR730E
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                     Options.env: 0x561e0145dce0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                Options.info_log: 0x561e024b8960
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.write_buffer_manager: 0x561e014c2b40
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Compression algorithms supported:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kZSTD supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kXpressCompression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kZlibCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e014618d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561e01461a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e01461a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:15 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561e01461a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835968973, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835978926, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835991315, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846836001317, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846836004409, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561e024ba000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: DB pointer 0x561e02672000
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usag
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: _get_class not permitted to load lua
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: _get_class not permitted to load sdk
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 load_pgs
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 load_pgs opened 0 pgs
Jan 31 03:07:16 np0005603663 ceph-osd[85971]: osd.0 0 log_to_monitors true
Jan 31 03:07:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:16.052+0000 7fbc8d3c78c0 -1 osd.0 0 log_to_monitors true
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.074628988 +0000 UTC m=+0.033577709 container create 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:07:16 np0005603663 systemd[1]: Started libpod-conmon-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope.
Jan 31 03:07:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.059004863 +0000 UTC m=+0.017953594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.168772214 +0000 UTC m=+0.127720935 container init 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.176944638 +0000 UTC m=+0.135893359 container start 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.183382111 +0000 UTC m=+0.142330862 container attach 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:16 np0005603663 inspiring_aryabhata[86527]: 167 167
Jan 31 03:07:16 np0005603663 systemd[1]: libpod-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope: Deactivated successfully.
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.185751189 +0000 UTC m=+0.144699910 container died 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c45ed1e1ebf202f0ac94f7e949c4b1add2bf7021237bedb51eff2c1f69be5642-merged.mount: Deactivated successfully.
Jan 31 03:07:16 np0005603663 podman[86477]: 2026-01-31 08:07:16.257193667 +0000 UTC m=+0.216142398 container remove 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:07:16 np0005603663 systemd[1]: libpod-conmon-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope: Deactivated successfully.
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.440306001 +0000 UTC m=+0.033848216 container create a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:07:16 np0005603663 systemd[1]: Started libpod-conmon-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope.
Jan 31 03:07:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.522306601 +0000 UTC m=+0.115848836 container init a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.425823258 +0000 UTC m=+0.019365503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.530585157 +0000 UTC m=+0.124127412 container start a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.534487149 +0000 UTC m=+0.128029384 container attach a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: Deploying daemon osd.1 on compute-0
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:16 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:16 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:16 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 03:07:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]:                            [--no-systemd] [--no-tmpfs]
Jan 31 03:07:16 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 03:07:16 np0005603663 systemd[1]: libpod-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope: Deactivated successfully.
Jan 31 03:07:16 np0005603663 podman[86558]: 2026-01-31 08:07:16.69648124 +0000 UTC m=+0.290023555 container died a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:16 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 03:07:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704-merged.mount: Deactivated successfully.
Jan 31 03:07:17 np0005603663 podman[86558]: 2026-01-31 08:07:17.217812395 +0000 UTC m=+0.811354610 container remove a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:07:17 np0005603663 systemd[1]: libpod-conmon-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope: Deactivated successfully.
Jan 31 03:07:17 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:17 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:17 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 done with init, starting boot process
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 start_boot
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 03:07:17 np0005603663 ceph-osd[85971]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 03:07:17 np0005603663 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:17 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:17 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:17 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:17 np0005603663 systemd[1]: Starting Ceph osd.1 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:07:18 np0005603663 podman[86734]: 2026-01-31 08:07:18.127993623 +0000 UTC m=+0.042412082 container create 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:07:18 np0005603663 podman[86734]: 2026-01-31 08:07:18.102509875 +0000 UTC m=+0.016928344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:18 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:18 np0005603663 podman[86734]: 2026-01-31 08:07:18.26423391 +0000 UTC m=+0.178652379 container init 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:18 np0005603663 podman[86734]: 2026-01-31 08:07:18.271377503 +0000 UTC m=+0.185795942 container start 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:07:18 np0005603663 podman[86734]: 2026-01-31 08:07:18.295568304 +0000 UTC m=+0.209986853 container attach 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:07:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:18 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:18 np0005603663 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:18 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:18 np0005603663 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:18 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:18 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:18 np0005603663 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:18 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:18 np0005603663 lvm[86835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:18 np0005603663 lvm[86836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:18 np0005603663 lvm[86835]: VG ceph_vg0 finished
Jan 31 03:07:18 np0005603663 lvm[86836]: VG ceph_vg1 finished
Jan 31 03:07:18 np0005603663 lvm[86838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:18 np0005603663 lvm[86838]: VG ceph_vg2 finished
Jan 31 03:07:18 np0005603663 lvm[86839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:18 np0005603663 lvm[86839]: VG ceph_vg2 finished
Jan 31 03:07:18 np0005603663 lvm[86842]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:18 np0005603663 lvm[86842]: VG ceph_vg2 finished
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:19 np0005603663 bash[86734]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:19 np0005603663 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 03:07:19 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 03:07:19 np0005603663 bash[86734]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 03:07:19 np0005603663 systemd[1]: libpod-1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d.scope: Deactivated successfully.
Jan 31 03:07:19 np0005603663 systemd[1]: libpod-1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d.scope: Consumed 1.164s CPU time.
Jan 31 03:07:19 np0005603663 podman[86955]: 2026-01-31 08:07:19.290900182 +0000 UTC m=+0.039361185 container died 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:07:19 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f-merged.mount: Deactivated successfully.
Jan 31 03:07:19 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:19 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:19 np0005603663 podman[86955]: 2026-01-31 08:07:19.876233122 +0000 UTC m=+0.624694105 container remove 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:07:20 np0005603663 podman[87016]: 2026-01-31 08:07:20.116416284 +0000 UTC m=+0.062261777 container create 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:20 np0005603663 podman[87016]: 2026-01-31 08:07:20.077225386 +0000 UTC m=+0.023070949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:20 np0005603663 podman[87016]: 2026-01-31 08:07:20.263695776 +0000 UTC m=+0.209541359 container init 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:07:20 np0005603663 podman[87016]: 2026-01-31 08:07:20.269183643 +0000 UTC m=+0.215029126 container start 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:07:20 np0005603663 bash[87016]: 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a
Jan 31 03:07:20 np0005603663 systemd[1]: Started Ceph osd.1 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: pidfile_write: ignore empty --pid-file
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:20 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 31 03:07:20 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: load: jerasure load: lrc 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:20 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount shared_bdev_used = 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Git sha 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DB Session ID:  5YQD9ZNBLM5EUMTKY353
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                     Options.env: 0x55d7805d5ea0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x55d7816268a0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x55d78063ab40
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZSTD supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kXpressCompression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZlibCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7805d98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7805d9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d7805d9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:20 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fafa2376-40a7-4fa7-b459-89b99fa109d9
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840771470, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840772917, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: freelist init
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: freelist _read_cfg
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs umount
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluefs mount shared_bdev_used = 27262976
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Git sha 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DB Session ID:  5YQD9ZNBLM5EUMTKY352
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                     Options.env: 0x55d7805d5ce0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x55d781626a20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x55d78063ab40
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZSTD supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kXpressCompression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kZlibCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d7805d9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fafa2376-40a7-4fa7-b459-89b99fa109d9
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840825236, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840890610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840901983, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840933344, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840965462, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:20 np0005603663 ceph-osd[87035]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 03:07:20 np0005603663 podman[87542]: 2026-01-31 08:07:20.977981535 +0000 UTC m=+0.071153110 container create 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:20.927885407 +0000 UTC m=+0.021056992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:21 np0005603663 systemd[1]: Started libpod-conmon-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope.
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d78180a000
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: rocksdb: DB pointer 0x55d7817e0000
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.3 total, 0.3 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.3 total, 0.3 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.3 total, 0.3 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.3 total, 0.3 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usag
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 03:07:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: _get_class not permitted to load lua
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: _get_class not permitted to load sdk
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 load_pgs
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 load_pgs opened 0 pgs
Jan 31 03:07:21 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:21.090+0000 7f9bcc7778c0 -1 osd.1 0 log_to_monitors true
Jan 31 03:07:21 np0005603663 ceph-osd[87035]: osd.1 0 log_to_monitors true
Jan 31 03:07:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 31 03:07:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:21.549642765 +0000 UTC m=+0.642814420 container init 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:21.559147276 +0000 UTC m=+0.652318891 container start 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:07:21 np0005603663 gifted_borg[87558]: 167 167
Jan 31 03:07:21 np0005603663 systemd[1]: libpod-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope: Deactivated successfully.
Jan 31 03:07:21 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:21 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:21.646827388 +0000 UTC m=+0.739999053 container attach 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:21.647821386 +0000 UTC m=+0.740993001 container died 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:07:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:21 np0005603663 systemd[1]: var-lib-containers-storage-overlay-97f428055667257181deafffaac6fe9300da634419039147cbdfad71092458a9-merged.mount: Deactivated successfully.
Jan 31 03:07:21 np0005603663 podman[87542]: 2026-01-31 08:07:21.920468335 +0000 UTC m=+1.013639950 container remove 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:21 np0005603663 systemd[1]: libpod-conmon-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope: Deactivated successfully.
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: Deploying daemon osd.2 on compute-0
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:22 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 03:07:22 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.242291678 +0000 UTC m=+0.081182648 container create 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.198636972 +0000 UTC m=+0.037528002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:22 np0005603663 systemd[1]: Started libpod-conmon-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope.
Jan 31 03:07:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.372198404 +0000 UTC m=+0.211089444 container init 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.38467868 +0000 UTC m=+0.223569630 container start 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.394530161 +0000 UTC m=+0.233421131 container attach 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:22 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 03:07:22 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]:                            [--no-systemd] [--no-tmpfs]
Jan 31 03:07:22 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 03:07:22 np0005603663 systemd[1]: libpod-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope: Deactivated successfully.
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.58864921 +0000 UTC m=+0.427540190 container died 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009-merged.mount: Deactivated successfully.
Jan 31 03:07:22 np0005603663 podman[87620]: 2026-01-31 08:07:22.744610589 +0000 UTC m=+0.583501529 container remove 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:22 np0005603663 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 03:07:22 np0005603663 systemd[1]: libpod-conmon-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope: Deactivated successfully.
Jan 31 03:07:22 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:23 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:23 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.419 iops: 7531.179 elapsed_sec: 0.398
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: log_channel(cluster) log [WRN] : OSD bench result of 7531.179157 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 0 waiting for initial osdmap
Jan 31 03:07:23 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:23.083+0000 7fbc89349640 -1 osd.0 0 waiting for initial osdmap
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 done with init, starting boot process
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 start_boot
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 03:07:23 np0005603663 ceph-osd[87035]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:23.140+0000 7fbc8414e640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 31 03:07:23 np0005603663 ceph-osd[85971]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 31 03:07:23 np0005603663 systemd[1]: Reloading.
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:23 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:07:23 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:07:23 np0005603663 systemd[1]: Starting Ceph osd.2 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 03:07:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 03:07:23 np0005603663 podman[87800]: 2026-01-31 08:07:23.75439324 +0000 UTC m=+0.047192288 container create 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:23 np0005603663 podman[87800]: 2026-01-31 08:07:23.728879012 +0000 UTC m=+0.021678050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:23 np0005603663 podman[87800]: 2026-01-31 08:07:23.893682224 +0000 UTC m=+0.186481252 container init 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:23 np0005603663 podman[87800]: 2026-01-31 08:07:23.901189228 +0000 UTC m=+0.193988236 container start 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:23 np0005603663 podman[87800]: 2026-01-31 08:07:23.908788575 +0000 UTC m=+0.201587573 container attach 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:24 np0005603663 ceph-osd[85971]: osd.0 9 tick checking mon for new map
Jan 31 03:07:24 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:24 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618] boot
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:24 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:24 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:24 np0005603663 ceph-osd[85971]: osd.0 11 state: booting -> active
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: OSD bench result of 7531.179157 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:24 np0005603663 lvm[87901]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:24 np0005603663 lvm[87899]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:24 np0005603663 lvm[87901]: VG ceph_vg1 finished
Jan 31 03:07:24 np0005603663 lvm[87899]: VG ceph_vg0 finished
Jan 31 03:07:24 np0005603663 lvm[87903]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:24 np0005603663 lvm[87903]: VG ceph_vg2 finished
Jan 31 03:07:24 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] creating mgr pool
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 31 03:07:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 bash[87800]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 03:07:24 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:24 np0005603663 bash[87800]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:25 np0005603663 bash[87800]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 03:07:25 np0005603663 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 03:07:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:25 np0005603663 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 03:07:25 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 03:07:25 np0005603663 bash[87800]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 03:07:25 np0005603663 systemd[1]: libpod-3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a.scope: Deactivated successfully.
Jan 31 03:07:25 np0005603663 podman[87800]: 2026-01-31 08:07:25.05580407 +0000 UTC m=+1.348603078 container died 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:07:25 np0005603663 systemd[1]: libpod-3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a.scope: Consumed 1.275s CPU time.
Jan 31 03:07:25 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:25 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618] boot
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 03:07:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc-merged.mount: Deactivated successfully.
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:25 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:25 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 31 03:07:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 03:07:25 np0005603663 ceph-osd[85971]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 03:07:25 np0005603663 ceph-osd[85971]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 03:07:25 np0005603663 ceph-osd[85971]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 03:07:25 np0005603663 podman[87800]: 2026-01-31 08:07:25.623928869 +0000 UTC m=+1.916727867 container remove 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:07:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 03:07:25 np0005603663 podman[88077]: 2026-01-31 08:07:25.879550542 +0000 UTC m=+0.092574012 container create b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:25 np0005603663 podman[88077]: 2026-01-31 08:07:25.81183444 +0000 UTC m=+0.024857890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:26 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:26 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:26 np0005603663 podman[88077]: 2026-01-31 08:07:26.331428895 +0000 UTC m=+0.544452365 container init b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:26 np0005603663 podman[88077]: 2026-01-31 08:07:26.340556745 +0000 UTC m=+0.553580175 container start b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: pidfile_write: ignore empty --pid-file
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 bash[88077]: b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b
Jan 31 03:07:26 np0005603663 systemd[1]: Started Ceph osd.2 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: load: jerasure load: lrc 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount shared_bdev_used = 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Git sha 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DB SUMMARY
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DB Session ID:  AMK3L2MLNV0PJCV1SAN1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                     Options.env: 0x5603a1dddea0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                Options.info_log: 0x5603a2e388a0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.write_buffer_manager: 0x5603a1e42b40
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Compression algorithms supported:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kZSTD supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kXpressCompression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kBZip2Compression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kZSTDNotFinalCompression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kLZ4Compression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kZlibCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kLZ4HCCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     kSnappyCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                                  cache_index_and_filter_blocks: 1
                                                  cache_index_and_filter_blocks_with_high_priority: 0
                                                  pin_l0_filter_and_index_blocks_in_cache: 0
                                                  pin_top_level_index_and_filter: 1
                                                  index_type: 0
                                                  data_block_index_type: 0
                                                  index_shortening: 1
                                                  data_block_hash_table_util_ratio: 0.750000
                                                  checksum: 4
                                                  no_block_cache: 0
                                                  block_cache: 0x5603a1de18d0
                                                  block_cache_name: BinnedLRUCache
                                                  block_cache_options:
                                                    capacity : 483183820
                                                    num_shard_bits : 4
                                                    strict_capacity_limit : 0
                                                    high_pri_pool_ratio: 0.000
                                                  block_cache_compressed: (nil)
                                                  persistent_cache: (nil)
                                                  block_size: 4096
                                                  block_size_deviation: 10
                                                  block_restart_interval: 16
                                                  index_block_restart_interval: 1
                                                  metadata_block_size: 4096
                                                  partition_filters: 0
                                                  use_delta_encoding: 1
                                                  filter_policy: bloomfilter
                                                  whole_key_filtering: 1
                                                  verify_compression: 0
                                                  read_amp_bytes_per_bit: 0
                                                  format_version: 5
                                                  enable_index_compression: 1
                                                  block_align: 0
                                                  max_auto_readahead_size: 262144
                                                  prepopulate_block_cache: 0
                                                  initial_auto_readahead_size: 8192
                                                  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e47fd480-0b39-49c2-8ccd-d36942261e3a
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846746435, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846747851, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: freelist init
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: freelist _read_cfg
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs umount
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluefs mount shared_bdev_used = 27262976
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: RocksDB version: 7.9.2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Git sha 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DB SUMMARY
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DB Session ID:  AMK3L2MLNV0PJCV1SAN0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: CURRENT file:  CURRENT
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.error_if_exists: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.create_if_missing: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                     Options.env: 0x5603a1dddce0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                Options.info_log: 0x5603a2e38a20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.statistics: (nil)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.use_fsync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.db_log_dir: 
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.write_buffer_manager: 0x5603a1e42b40
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.unordered_write: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.row_cache: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                              Options.wal_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.two_write_queues: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.wal_compression: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.atomic_flush: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_background_jobs: 4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_background_compactions: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_subcompactions: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.max_open_files: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Compression algorithms supported:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kZSTD supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kXpressCompression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kZlibCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5603a1de18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5603a1de1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e47fd480-0b39-49c2-8ccd-d36942261e3a
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846793688, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 03:07:26 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:26 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:26 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 03:07:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847043226, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846846, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:27 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:27 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847114210, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847320988, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847364872, "job": 1, "event": "recovery_finished"}
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5603a2e3a000
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: DB pointer 0x5603a2ff2000
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.8 total, 0.8 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usag
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: _get_class not permitted to load lua
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: _get_class not permitted to load sdk
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 load_pgs
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 load_pgs opened 0 pgs
Jan 31 03:07:27 np0005603663 ceph-osd[88096]: osd.2 0 log_to_monitors true
Jan 31 03:07:27 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:27.605+0000 7f0dcf1a48c0 -1 osd.2 0 log_to_monitors true
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 31 03:07:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 03:07:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 03:07:27 np0005603663 podman[88608]: 2026-01-31 08:07:27.73008237 +0000 UTC m=+0.029746199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:27 np0005603663 podman[88608]: 2026-01-31 08:07:27.860394189 +0000 UTC m=+0.160057978 container create a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:07:27 np0005603663 systemd[1]: Started libpod-conmon-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope.
Jan 31 03:07:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 03:07:28 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:28 np0005603663 podman[88608]: 2026-01-31 08:07:28.10822918 +0000 UTC m=+0.407893009 container init a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:28 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:28 np0005603663 podman[88608]: 2026-01-31 08:07:28.118243976 +0000 UTC m=+0.417907735 container start a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:28 np0005603663 naughty_curie[88624]: 167 167
Jan 31 03:07:28 np0005603663 systemd[1]: libpod-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope: Deactivated successfully.
Jan 31 03:07:28 np0005603663 podman[88608]: 2026-01-31 08:07:28.147157519 +0000 UTC m=+0.446821318 container attach a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:28 np0005603663 podman[88608]: 2026-01-31 08:07:28.151051121 +0000 UTC m=+0.450714920 container died a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:07:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-932da1693f15d17382c6daf177625d408002300c7d9fc8e9e6cb5ddedb368f69-merged.mount: Deactivated successfully.
Jan 31 03:07:28 np0005603663 podman[88608]: 2026-01-31 08:07:28.288680497 +0000 UTC m=+0.588344246 container remove a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:07:28 np0005603663 systemd[1]: libpod-conmon-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope: Deactivated successfully.
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:28 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:28 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:28 np0005603663 podman[88650]: 2026-01-31 08:07:28.468703804 +0000 UTC m=+0.062868905 container create 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:28 np0005603663 systemd[1]: Started libpod-conmon-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope.
Jan 31 03:07:28 np0005603663 podman[88650]: 2026-01-31 08:07:28.443047652 +0000 UTC m=+0.037212793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:28 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:28 np0005603663 podman[88650]: 2026-01-31 08:07:28.58808035 +0000 UTC m=+0.182245451 container init 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:28 np0005603663 podman[88650]: 2026-01-31 08:07:28.593997589 +0000 UTC m=+0.188162650 container start 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:07:28 np0005603663 podman[88650]: 2026-01-31 08:07:28.604139088 +0000 UTC m=+0.198304139 container attach 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:28 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 03:07:28 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 42.254 iops: 10817.054 elapsed_sec: 0.277
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: log_channel(cluster) log [WRN] : OSD bench result of 10817.053791 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 0 waiting for initial osdmap
Jan 31 03:07:28 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:28.930+0000 7f9bc86f9640 -1 osd.1 0 waiting for initial osdmap
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 set_numa_affinity not setting numa affinity
Jan 31 03:07:28 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:28.948+0000 7f9bc34fe640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:28 np0005603663 ceph-osd[87035]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 03:07:29 np0005603663 lvm[88744]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:29 np0005603663 lvm[88744]: VG ceph_vg0 finished
Jan 31 03:07:29 np0005603663 lvm[88746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:29 np0005603663 lvm[88746]: VG ceph_vg1 finished
Jan 31 03:07:29 np0005603663 lvm[88748]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:29 np0005603663 lvm[88748]: VG ceph_vg2 finished
Jan 31 03:07:29 np0005603663 sweet_feistel[88667]: {}
Jan 31 03:07:29 np0005603663 systemd[1]: libpod-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope: Deactivated successfully.
Jan 31 03:07:29 np0005603663 podman[88650]: 2026-01-31 08:07:29.28592468 +0000 UTC m=+0.880089771 container died 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:07:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd-merged.mount: Deactivated successfully.
Jan 31 03:07:29 np0005603663 podman[88650]: 2026-01-31 08:07:29.334514286 +0000 UTC m=+0.928679347 container remove 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:07:29 np0005603663 systemd[1]: libpod-conmon-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope: Deactivated successfully.
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 done with init, starting boot process
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 start_boot
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 03:07:29 np0005603663 ceph-osd[88096]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 03:07:29 np0005603663 ceph-osd[87035]: osd.1 15 state: booting -> active
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419] boot
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:29 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[12,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: OSD bench result of 10817.053791 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419] boot
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:30 np0005603663 podman[88880]: 2026-01-31 08:07:30.083808385 +0000 UTC m=+0.105877602 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:30 np0005603663 podman[88880]: 2026-01-31 08:07:30.214476283 +0000 UTC m=+0.236545490 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 03:07:30 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:30 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:30 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:30 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[12,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.274223508 +0000 UTC m=+0.075900696 container create ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.230569643 +0000 UTC m=+0.032246871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:31 np0005603663 systemd[1]: Started libpod-conmon-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope.
Jan 31 03:07:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.403830436 +0000 UTC m=+0.205507604 container init ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.412805752 +0000 UTC m=+0.214482930 container start ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:31 np0005603663 interesting_mcclintock[89105]: 167 167
Jan 31 03:07:31 np0005603663 systemd[1]: libpod-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope: Deactivated successfully.
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.438696501 +0000 UTC m=+0.240373729 container attach ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.439129413 +0000 UTC m=+0.240806601 container died ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-72ecb5e8812ba81a5be7f05c338e6ace22800a6235dfbe799b9907a5eb3b18ba-merged.mount: Deactivated successfully.
Jan 31 03:07:31 np0005603663 podman[89089]: 2026-01-31 08:07:31.580392234 +0000 UTC m=+0.382069422 container remove ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:07:31 np0005603663 systemd[1]: libpod-conmon-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope: Deactivated successfully.
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:07:31
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 03:07:31 np0005603663 podman[89132]: 2026-01-31 08:07:31.769755545 +0000 UTC m=+0.059098497 container create 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:07:31 np0005603663 systemd[1]: Started libpod-conmon-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope.
Jan 31 03:07:31 np0005603663 podman[89132]: 2026-01-31 08:07:31.739329287 +0000 UTC m=+0.028672219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 03:07:31 np0005603663 podman[89132]: 2026-01-31 08:07:31.887442943 +0000 UTC m=+0.176785925 container init 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:07:31 np0005603663 podman[89132]: 2026-01-31 08:07:31.894886025 +0000 UTC m=+0.184228977 container start 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:31 np0005603663 ceph-mgr[75519]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 03:07:31 np0005603663 podman[89132]: 2026-01-31 08:07:31.911351495 +0000 UTC m=+0.200694457 container attach 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 03:07:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 03:07:32 np0005603663 python3[89192]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:32 np0005603663 podman[89196]: 2026-01-31 08:07:32.174548935 +0000 UTC m=+0.039401406 container create b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:07:32 np0005603663 systemd[1]: Started libpod-conmon-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope.
Jan 31 03:07:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:32 np0005603663 podman[89196]: 2026-01-31 08:07:32.154553804 +0000 UTC m=+0.019406255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:32 np0005603663 podman[89196]: 2026-01-31 08:07:32.273410495 +0000 UTC m=+0.138262976 container init b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:32 np0005603663 podman[89196]: 2026-01-31 08:07:32.282418112 +0000 UTC m=+0.147270543 container start b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:07:32 np0005603663 podman[89196]: 2026-01-31 08:07:32.292691745 +0000 UTC m=+0.157544176 container attach b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:32 np0005603663 determined_rubin[89158]: [
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:    {
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "available": false,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "being_replaced": false,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "ceph_device_lvm": false,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "lsm_data": {},
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "lvs": [],
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "path": "/dev/sr0",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "rejected_reasons": [
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "Has a FileSystem",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "Insufficient space (<5GB)"
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        ],
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        "sys_api": {
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "actuators": null,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "device_nodes": [
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:                "sr0"
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            ],
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "devname": "sr0",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "human_readable_size": "482.00 KB",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "id_bus": "ata",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "model": "QEMU DVD-ROM",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "nr_requests": "2",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "parent": "/dev/sr0",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "partitions": {},
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "path": "/dev/sr0",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "removable": "1",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "rev": "2.5+",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "ro": "0",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "rotational": "1",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "sas_address": "",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "sas_device_handle": "",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "scheduler_mode": "mq-deadline",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "sectors": 0,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "sectorsize": "2048",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "size": 493568.0,
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "support_discard": "2048",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "type": "disk",
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:            "vendor": "QEMU"
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:        }
Jan 31 03:07:32 np0005603663 determined_rubin[89158]:    }
Jan 31 03:07:32 np0005603663 determined_rubin[89158]: ]
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.559 iops: 9103.206 elapsed_sec: 0.330
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: log_channel(cluster) log [WRN] : OSD bench result of 9103.205508 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 0 waiting for initial osdmap
Jan 31 03:07:32 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:32.419+0000 7f0dcb126640 -1 osd.2 0 waiting for initial osdmap
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope: Deactivated successfully.
Jan 31 03:07:32 np0005603663 podman[89132]: 2026-01-31 08:07:32.434830031 +0000 UTC m=+0.724172943 container died 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:32 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:32.450+0000 7f0dc5f2b640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 set_numa_affinity not setting numa affinity
Jan 31 03:07:32 np0005603663 ceph-osd[88096]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 31 03:07:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb-merged.mount: Deactivated successfully.
Jan 31 03:07:32 np0005603663 podman[89132]: 2026-01-31 08:07:32.471686462 +0000 UTC m=+0.761029374 container remove 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-conmon-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope: Deactivated successfully.
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 03:07:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395999579' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 03:07:32 np0005603663 bold_williams[89214]: 
Jan 31 03:07:32 np0005603663 bold_williams[89214]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":2,"osd_up_since":1769846849,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":55218176,"bytes_avail":42886066176,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-31T08:06:11:330734+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T08:06:11.333031+0000","services":{}},"progress_events":{}}
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope: Deactivated successfully.
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:32 np0005603663 podman[90086]: 2026-01-31 08:07:32.83191403 +0000 UTC m=+0.026403654 container died b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51-merged.mount: Deactivated successfully.
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.86802507 +0000 UTC m=+0.043795750 container create b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:32 np0005603663 podman[90086]: 2026-01-31 08:07:32.873574549 +0000 UTC m=+0.068064153 container remove b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-conmon-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope: Deactivated successfully.
Jan 31 03:07:32 np0005603663 systemd[1]: Started libpod-conmon-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope.
Jan 31 03:07:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.926459968 +0000 UTC m=+0.102230648 container init b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.929948587 +0000 UTC m=+0.105719267 container start b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:07:32 np0005603663 affectionate_ardinghelli[90118]: 167 167
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.932596113 +0000 UTC m=+0.108366793 container attach b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope: Deactivated successfully.
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.933487338 +0000 UTC m=+0.109258018 container died b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.848570775 +0000 UTC m=+0.024341475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-45c0b44f7f4cf3c08279c154f04477e43da43900f233b798181cc4c43150acc9-merged.mount: Deactivated successfully.
Jan 31 03:07:32 np0005603663 podman[90095]: 2026-01-31 08:07:32.967985562 +0000 UTC m=+0.143756242 container remove b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:07:32 np0005603663 systemd[1]: libpod-conmon-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope: Deactivated successfully.
Jan 31 03:07:33 np0005603663 podman[90143]: 2026-01-31 08:07:33.113927496 +0000 UTC m=+0.057262065 container create 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: OSD bench result of 9103.205508 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:07:33 np0005603663 systemd[1]: Started libpod-conmon-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope.
Jan 31 03:07:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 podman[90143]: 2026-01-31 08:07:33.088011387 +0000 UTC m=+0.031346036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:33 np0005603663 podman[90143]: 2026-01-31 08:07:33.193436725 +0000 UTC m=+0.136771334 container init 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:07:33 np0005603663 podman[90143]: 2026-01-31 08:07:33.203403939 +0000 UTC m=+0.146738518 container start 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:07:33 np0005603663 podman[90143]: 2026-01-31 08:07:33.208539396 +0000 UTC m=+0.151873985 container attach 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:33 np0005603663 python3[90184]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:33 np0005603663 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:33 np0005603663 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 03:07:33 np0005603663 podman[90191]: 2026-01-31 08:07:33.385245987 +0000 UTC m=+0.051542801 container create 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:07:33 np0005603663 systemd[1]: Started libpod-conmon-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope.
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780] boot
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 03:07:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:33 np0005603663 ceph-osd[88096]: osd.2 18 state: booting -> active
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:33 np0005603663 podman[90191]: 2026-01-31 08:07:33.442801649 +0000 UTC m=+0.109098483 container init 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:33 np0005603663 podman[90191]: 2026-01-31 08:07:33.44739444 +0000 UTC m=+0.113691254 container start 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:33 np0005603663 podman[90191]: 2026-01-31 08:07:33.450025936 +0000 UTC m=+0.116322780 container attach 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:33 np0005603663 podman[90191]: 2026-01-31 08:07:33.365426272 +0000 UTC m=+0.031723106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fqetdi(active, since 62s)
Jan 31 03:07:33 np0005603663 wonderful_saha[90180]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:07:33 np0005603663 wonderful_saha[90180]: --> All data devices are unavailable
Jan 31 03:07:33 np0005603663 systemd[1]: libpod-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope: Deactivated successfully.
Jan 31 03:07:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 03:07:33 np0005603663 podman[90245]: 2026-01-31 08:07:33.659275086 +0000 UTC m=+0.036680368 container died 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370-merged.mount: Deactivated successfully.
Jan 31 03:07:33 np0005603663 podman[90245]: 2026-01-31 08:07:33.711456504 +0000 UTC m=+0.088861786 container remove 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:07:33 np0005603663 systemd[1]: libpod-conmon-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope: Deactivated successfully.
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.098290291 +0000 UTC m=+0.043060209 container create db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:07:34 np0005603663 systemd[1]: Started libpod-conmon-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope.
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780] boot
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.157572303 +0000 UTC m=+0.102342271 container init db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.161542586 +0000 UTC m=+0.106312514 container start db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.16484668 +0000 UTC m=+0.109616618 container attach db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:34 np0005603663 wizardly_curran[90341]: 167 167
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 conmon[90341]: conmon db08e22ba650a538c61f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope/container/memory.events
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.166614091 +0000 UTC m=+0.111384009 container died db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.083852619 +0000 UTC m=+0.028622557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4df02e7f83ade91d3ffa9ef74d3d2f7840203b5b4399e8a6e36b6b81f131bc98-merged.mount: Deactivated successfully.
Jan 31 03:07:34 np0005603663 podman[90324]: 2026-01-31 08:07:34.202160855 +0000 UTC m=+0.146930773 container remove db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-conmon-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.328557991 +0000 UTC m=+0.043947325 container create 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:34 np0005603663 systemd[1]: Started libpod-conmon-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope.
Jan 31 03:07:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.302360304 +0000 UTC m=+0.017749638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.430083388 +0000 UTC m=+0.145472742 container init 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.436899682 +0000 UTC m=+0.152288986 container start 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.440214437 +0000 UTC m=+0.155603761 container attach 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 31 03:07:34 np0005603663 gallant_antonelli[90210]: pool 'vms' created
Jan 31 03:07:34 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 31 03:07:34 np0005603663 podman[90191]: 2026-01-31 08:07:34.568805086 +0000 UTC m=+1.235101910 container died 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5-merged.mount: Deactivated successfully.
Jan 31 03:07:34 np0005603663 podman[90191]: 2026-01-31 08:07:34.605180443 +0000 UTC m=+1.271477267 container remove 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-conmon-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]: {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    "0": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "devices": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "/dev/loop3"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            ],
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_name": "ceph_lv0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_size": "21470642176",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "name": "ceph_lv0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "tags": {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.crush_device_class": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.encrypted": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_id": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.vdo": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.with_tpm": "0"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            },
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "vg_name": "ceph_vg0"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        }
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    ],
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    "1": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "devices": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "/dev/loop4"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            ],
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_name": "ceph_lv1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_size": "21470642176",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "name": "ceph_lv1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "tags": {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.crush_device_class": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.encrypted": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_id": "1",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.vdo": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.with_tpm": "0"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            },
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "vg_name": "ceph_vg1"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        }
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    ],
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    "2": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "devices": [
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "/dev/loop5"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            ],
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_name": "ceph_lv2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_size": "21470642176",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "name": "ceph_lv2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "tags": {
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.crush_device_class": "",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.encrypted": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osd_id": "2",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.vdo": "0",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:                "ceph.with_tpm": "0"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            },
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "type": "block",
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:            "vg_name": "ceph_vg2"
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:        }
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]:    ]
Jan 31 03:07:34 np0005603663 dreamy_shockley[90380]: }
Jan 31 03:07:34 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 conmon[90380]: conmon 73059d3a5841549c761a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope/container/memory.events
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.768888414 +0000 UTC m=+0.484277758 container died 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:07:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84-merged.mount: Deactivated successfully.
Jan 31 03:07:34 np0005603663 podman[90364]: 2026-01-31 08:07:34.816346668 +0000 UTC m=+0.531736002 container remove 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 03:07:34 np0005603663 systemd[1]: libpod-conmon-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope: Deactivated successfully.
Jan 31 03:07:34 np0005603663 python3[90427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:34 np0005603663 podman[90468]: 2026-01-31 08:07:34.963327292 +0000 UTC m=+0.044514781 container create 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:34 np0005603663 systemd[1]: Started libpod-conmon-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope.
Jan 31 03:07:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:34.945604356 +0000 UTC m=+0.026791885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:35.050998873 +0000 UTC m=+0.132186412 container init 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:35.055630895 +0000 UTC m=+0.136818394 container start 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:35.059966649 +0000 UTC m=+0.141154208 container attach 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.192873501 +0000 UTC m=+0.039022584 container create f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:35 np0005603663 systemd[1]: Started libpod-conmon-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope.
Jan 31 03:07:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.246606104 +0000 UTC m=+0.092755207 container init f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.250595968 +0000 UTC m=+0.096745051 container start f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:07:35 np0005603663 elastic_kilby[90559]: 167 167
Jan 31 03:07:35 np0005603663 systemd[1]: libpod-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope: Deactivated successfully.
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.253631754 +0000 UTC m=+0.099780827 container attach f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.253902422 +0000 UTC m=+0.100051505 container died f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.172745347 +0000 UTC m=+0.018894480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:35 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5a6ba582449da7830ae0bc6b858444e19af027c3dbcfa05727e41de0a36c936e-merged.mount: Deactivated successfully.
Jan 31 03:07:35 np0005603663 podman[90524]: 2026-01-31 08:07:35.281383576 +0000 UTC m=+0.127532659 container remove f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:35 np0005603663 systemd[1]: libpod-conmon-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope: Deactivated successfully.
Jan 31 03:07:35 np0005603663 podman[90583]: 2026-01-31 08:07:35.425397064 +0000 UTC m=+0.059290853 container create aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:35 np0005603663 systemd[1]: Started libpod-conmon-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope.
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 podman[90583]: 2026-01-31 08:07:35.400847754 +0000 UTC m=+0.034741603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:35 np0005603663 podman[90583]: 2026-01-31 08:07:35.507488736 +0000 UTC m=+0.141382575 container init aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:35 np0005603663 podman[90583]: 2026-01-31 08:07:35.51639528 +0000 UTC m=+0.150289079 container start aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:35 np0005603663 podman[90583]: 2026-01-31 08:07:35.519731256 +0000 UTC m=+0.153625115 container attach aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 31 03:07:35 np0005603663 romantic_pare[90508]: pool 'volumes' created
Jan 31 03:07:35 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 31 03:07:35 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:35 np0005603663 systemd[1]: libpod-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope: Deactivated successfully.
Jan 31 03:07:35 np0005603663 conmon[90508]: conmon 4b5231c3a29176903f2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope/container/memory.events
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:35.59490621 +0000 UTC m=+0.676093759 container died 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 03:07:35 np0005603663 systemd[1]: var-lib-containers-storage-overlay-01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c-merged.mount: Deactivated successfully.
Jan 31 03:07:35 np0005603663 podman[90468]: 2026-01-31 08:07:35.63240268 +0000 UTC m=+0.713590169 container remove 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:35 np0005603663 systemd[1]: libpod-conmon-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope: Deactivated successfully.
Jan 31 03:07:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v43: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:35 np0005603663 python3[90657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:35 np0005603663 podman[90698]: 2026-01-31 08:07:35.996769086 +0000 UTC m=+0.038856599 container create a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:07:36 np0005603663 systemd[1]: Started libpod-conmon-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope.
Jan 31 03:07:36 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:36 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:36 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:36.051320762 +0000 UTC m=+0.093408315 container init a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:36.055912274 +0000 UTC m=+0.097999767 container start a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:36.059026782 +0000 UTC m=+0.101114395 container attach a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:35.979496223 +0000 UTC m=+0.021583716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:36 np0005603663 lvm[90738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:07:36 np0005603663 lvm[90741]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:07:36 np0005603663 lvm[90741]: VG ceph_vg1 finished
Jan 31 03:07:36 np0005603663 lvm[90738]: VG ceph_vg0 finished
Jan 31 03:07:36 np0005603663 lvm[90745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:07:36 np0005603663 lvm[90745]: VG ceph_vg2 finished
Jan 31 03:07:36 np0005603663 amazing_volhard[90600]: {}
Jan 31 03:07:36 np0005603663 systemd[1]: libpod-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Deactivated successfully.
Jan 31 03:07:36 np0005603663 systemd[1]: libpod-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Consumed 1.048s CPU time.
Jan 31 03:07:36 np0005603663 podman[90583]: 2026-01-31 08:07:36.273770509 +0000 UTC m=+0.907664288 container died aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:07:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da-merged.mount: Deactivated successfully.
Jan 31 03:07:36 np0005603663 podman[90583]: 2026-01-31 08:07:36.30781212 +0000 UTC m=+0.941705899 container remove aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:36 np0005603663 systemd[1]: libpod-conmon-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Deactivated successfully.
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 31 03:07:36 np0005603663 sweet_villani[90728]: pool 'backups' created
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:36 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:36 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:36 np0005603663 systemd[1]: libpod-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope: Deactivated successfully.
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:36.592550944 +0000 UTC m=+0.634638477 container died a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7-merged.mount: Deactivated successfully.
Jan 31 03:07:36 np0005603663 podman[90698]: 2026-01-31 08:07:36.632505784 +0000 UTC m=+0.674593287 container remove a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:07:36 np0005603663 systemd[1]: libpod-conmon-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope: Deactivated successfully.
Jan 31 03:07:36 np0005603663 python3[90843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:36 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:36 np0005603663 podman[90844]: 2026-01-31 08:07:36.984512528 +0000 UTC m=+0.056633767 container create be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:37 np0005603663 systemd[1]: Started libpod-conmon-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope.
Jan 31 03:07:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:36.961552872 +0000 UTC m=+0.033674161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:37.071414257 +0000 UTC m=+0.143535566 container init be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:37.0802973 +0000 UTC m=+0.152418549 container start be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:37.083568584 +0000 UTC m=+0.155689833 container attach be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 31 03:07:37 np0005603663 reverent_golick[90860]: pool 'images' created
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:37 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:37 np0005603663 systemd[1]: libpod-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope: Deactivated successfully.
Jan 31 03:07:37 np0005603663 conmon[90860]: conmon be0eb96bfc2e22737588 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope/container/memory.events
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:37.610195559 +0000 UTC m=+0.682316768 container died be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:07:37 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92-merged.mount: Deactivated successfully.
Jan 31 03:07:37 np0005603663 podman[90844]: 2026-01-31 08:07:37.64774065 +0000 UTC m=+0.719861859 container remove be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:07:37 np0005603663 systemd[1]: libpod-conmon-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope: Deactivated successfully.
Jan 31 03:07:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 1 active+clean, 4 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:37 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:37 np0005603663 python3[90924]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.022843152 +0000 UTC m=+0.044749627 container create b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:38 np0005603663 systemd[1]: Started libpod-conmon-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope.
Jan 31 03:07:38 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:37.999821626 +0000 UTC m=+0.021728161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.112786739 +0000 UTC m=+0.134693264 container init b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.117701899 +0000 UTC m=+0.139608374 container start b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.12791713 +0000 UTC m=+0.149823655 container attach b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 31 03:07:38 np0005603663 kind_perlman[90940]: pool 'cephfs.cephfs.meta' created
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:38 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:38 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 31 03:07:38 np0005603663 systemd[1]: libpod-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope: Deactivated successfully.
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.635166393 +0000 UTC m=+0.657072838 container died b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:38 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1-merged.mount: Deactivated successfully.
Jan 31 03:07:38 np0005603663 podman[90925]: 2026-01-31 08:07:38.675210075 +0000 UTC m=+0.697116510 container remove b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:38 np0005603663 systemd[1]: libpod-conmon-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope: Deactivated successfully.
Jan 31 03:07:38 np0005603663 python3[91007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:39 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.01294549 +0000 UTC m=+0.044787589 container create c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:07:39 np0005603663 systemd[1]: Started libpod-conmon-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope.
Jan 31 03:07:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:38.98982042 +0000 UTC m=+0.021662569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.090011049 +0000 UTC m=+0.121853128 container init c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.094887348 +0000 UTC m=+0.126729417 container start c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.098395538 +0000 UTC m=+0.130237627 container attach c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 31 03:07:39 np0005603663 kind_khayyam[91023]: pool 'cephfs.cephfs.data' created
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 31 03:07:39 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:07:39 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:39 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 03:07:39 np0005603663 systemd[1]: libpod-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope: Deactivated successfully.
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.642948905 +0000 UTC m=+0.674791004 container died c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:07:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v49: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:39 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6-merged.mount: Deactivated successfully.
Jan 31 03:07:39 np0005603663 podman[91008]: 2026-01-31 08:07:39.684475179 +0000 UTC m=+0.716317278 container remove c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:39 np0005603663 systemd[1]: libpod-conmon-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope: Deactivated successfully.
Jan 31 03:07:40 np0005603663 python3[91087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:40 np0005603663 podman[91088]: 2026-01-31 08:07:40.077187204 +0000 UTC m=+0.055437103 container create 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:40 np0005603663 systemd[1]: Started libpod-conmon-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope.
Jan 31 03:07:40 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:40 np0005603663 podman[91088]: 2026-01-31 08:07:40.149437235 +0000 UTC m=+0.127687224 container init 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:07:40 np0005603663 podman[91088]: 2026-01-31 08:07:40.061201038 +0000 UTC m=+0.039451037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:40 np0005603663 podman[91088]: 2026-01-31 08:07:40.158155824 +0000 UTC m=+0.136405723 container start 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:40 np0005603663 podman[91088]: 2026-01-31 08:07:40.16116584 +0000 UTC m=+0.139415769 container attach 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 31 03:07:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 03:07:40 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 31 03:07:41 np0005603663 recursing_heyrovsky[91104]: enabled application 'rbd' on pool 'vms'
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 03:07:41 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 03:07:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:41 np0005603663 systemd[1]: libpod-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope: Deactivated successfully.
Jan 31 03:07:41 np0005603663 podman[91088]: 2026-01-31 08:07:41.673456047 +0000 UTC m=+1.651706026 container died 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:07:41 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b-merged.mount: Deactivated successfully.
Jan 31 03:07:41 np0005603663 podman[91088]: 2026-01-31 08:07:41.750881486 +0000 UTC m=+1.729131415 container remove 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:41 np0005603663 systemd[1]: libpod-conmon-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope: Deactivated successfully.
Jan 31 03:07:42 np0005603663 python3[91168]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:42 np0005603663 podman[91169]: 2026-01-31 08:07:42.14502338 +0000 UTC m=+0.018803907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:42 np0005603663 podman[91169]: 2026-01-31 08:07:42.375394442 +0000 UTC m=+0.249174949 container create 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:42 np0005603663 systemd[1]: Started libpod-conmon-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope.
Jan 31 03:07:42 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:42 np0005603663 podman[91169]: 2026-01-31 08:07:42.438546403 +0000 UTC m=+0.312326940 container init 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:07:42 np0005603663 podman[91169]: 2026-01-31 08:07:42.444343599 +0000 UTC m=+0.318124116 container start 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:07:42 np0005603663 podman[91169]: 2026-01-31 08:07:42.448064864 +0000 UTC m=+0.321845361 container attach 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 31 03:07:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 31 03:07:43 np0005603663 quirky_herschel[91184]: enabled application 'rbd' on pool 'volumes'
Jan 31 03:07:43 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 31 03:07:43 np0005603663 systemd[1]: libpod-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope: Deactivated successfully.
Jan 31 03:07:43 np0005603663 podman[91169]: 2026-01-31 08:07:43.736430528 +0000 UTC m=+1.610211085 container died 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:07:43 np0005603663 systemd[1]: var-lib-containers-storage-overlay-79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3-merged.mount: Deactivated successfully.
Jan 31 03:07:43 np0005603663 podman[91169]: 2026-01-31 08:07:43.78102884 +0000 UTC m=+1.654809387 container remove 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:43 np0005603663 systemd[1]: libpod-conmon-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope: Deactivated successfully.
Jan 31 03:07:44 np0005603663 python3[91245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.085961639 +0000 UTC m=+0.041371771 container create 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:44 np0005603663 systemd[1]: Started libpod-conmon-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope.
Jan 31 03:07:44 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:44 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:44 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.139692822 +0000 UTC m=+0.095102954 container init 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.14454684 +0000 UTC m=+0.099956972 container start 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.147800673 +0000 UTC m=+0.103210805 container attach 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.063647692 +0000 UTC m=+0.019057854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 31 03:07:44 np0005603663 beautiful_chaplygin[91261]: enabled application 'rbd' on pool 'backups'
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 03:07:44 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 03:07:44 np0005603663 systemd[1]: libpod-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope: Deactivated successfully.
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.740272676 +0000 UTC m=+0.695682808 container died 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:44 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940-merged.mount: Deactivated successfully.
Jan 31 03:07:44 np0005603663 podman[91246]: 2026-01-31 08:07:44.779129324 +0000 UTC m=+0.734539456 container remove 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:44 np0005603663 systemd[1]: libpod-conmon-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope: Deactivated successfully.
Jan 31 03:07:45 np0005603663 python3[91323]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.087147201 +0000 UTC m=+0.053537758 container create 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:45 np0005603663 systemd[1]: Started libpod-conmon-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope.
Jan 31 03:07:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.1565222 +0000 UTC m=+0.122912727 container init 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.062714724 +0000 UTC m=+0.029105271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.161123332 +0000 UTC m=+0.127513849 container start 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.16458336 +0000 UTC m=+0.130973907 container attach 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 03:07:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 03:07:45 np0005603663 infallible_kare[91338]: enabled application 'rbd' on pool 'images'
Jan 31 03:07:45 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 03:07:45 np0005603663 systemd[1]: libpod-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope: Deactivated successfully.
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.761566191 +0000 UTC m=+0.727956708 container died 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e-merged.mount: Deactivated successfully.
Jan 31 03:07:45 np0005603663 podman[91324]: 2026-01-31 08:07:45.802045966 +0000 UTC m=+0.768436513 container remove 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:07:45 np0005603663 systemd[1]: libpod-conmon-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope: Deactivated successfully.
Jan 31 03:07:46 np0005603663 python3[91402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.131172004 +0000 UTC m=+0.039806117 container create 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:07:46 np0005603663 systemd[1]: Started libpod-conmon-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope.
Jan 31 03:07:46 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.188610163 +0000 UTC m=+0.097244276 container init 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.193534003 +0000 UTC m=+0.102168156 container start 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.196635762 +0000 UTC m=+0.105269895 container attach 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.114396715 +0000 UTC m=+0.023030918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 03:07:46 np0005603663 recursing_moser[91418]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 03:07:46 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 03:07:46 np0005603663 systemd[1]: libpod-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope: Deactivated successfully.
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.77601539 +0000 UTC m=+0.684649503 container died 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306-merged.mount: Deactivated successfully.
Jan 31 03:07:46 np0005603663 podman[91403]: 2026-01-31 08:07:46.810084552 +0000 UTC m=+0.718718695 container remove 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:46 np0005603663 systemd[1]: libpod-conmon-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope: Deactivated successfully.
Jan 31 03:07:47 np0005603663 python3[91479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.132654294 +0000 UTC m=+0.051545862 container create d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:47 np0005603663 systemd[1]: Started libpod-conmon-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope.
Jan 31 03:07:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.191888984 +0000 UTC m=+0.110780832 container init d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.195925309 +0000 UTC m=+0.114816907 container start d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.199371437 +0000 UTC m=+0.118263035 container attach d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.11287418 +0000 UTC m=+0.031765768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 03:07:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 03:07:47 np0005603663 funny_hertz[91495]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 03:07:47 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 03:07:47 np0005603663 systemd[1]: libpod-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope: Deactivated successfully.
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.799554899 +0000 UTC m=+0.718446507 container died d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:07:47 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f-merged.mount: Deactivated successfully.
Jan 31 03:07:47 np0005603663 podman[91480]: 2026-01-31 08:07:47.845828239 +0000 UTC m=+0.764719837 container remove d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:47 np0005603663 systemd[1]: libpod-conmon-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope: Deactivated successfully.
Jan 31 03:07:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:48 np0005603663 python3[91606]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:07:49 np0005603663 python3[91677]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846868.463966-36775-35686451693142/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:07:49 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 03:07:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:49 np0005603663 python3[91779]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:07:50 np0005603663 python3[91854]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846869.3998997-36789-143413542952391/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=061161ae8da8cd523119e3ac10ce6756b3664db4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:07:50 np0005603663 python3[91904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:50 np0005603663 podman[91905]: 2026-01-31 08:07:50.600892573 +0000 UTC m=+0.034962918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:50 np0005603663 podman[91905]: 2026-01-31 08:07:50.751060677 +0000 UTC m=+0.185130932 container create 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:07:50 np0005603663 systemd[1]: Started libpod-conmon-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope.
Jan 31 03:07:50 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:50 np0005603663 podman[91905]: 2026-01-31 08:07:50.878302437 +0000 UTC m=+0.312372722 container init 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:50 np0005603663 podman[91905]: 2026-01-31 08:07:50.883798664 +0000 UTC m=+0.317868929 container start 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:07:50 np0005603663 podman[91905]: 2026-01-31 08:07:50.96571508 +0000 UTC m=+0.399785445 container attach 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 03:07:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:07:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 03:07:51 np0005603663 reverent_mirzakhani[91921]: 
Jan 31 03:07:51 np0005603663 reverent_mirzakhani[91921]: [global]
Jan 31 03:07:51 np0005603663 reverent_mirzakhani[91921]: #011fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 03:07:51 np0005603663 reverent_mirzakhani[91921]: #011mon_host = 192.168.122.100
Jan 31 03:07:51 np0005603663 reverent_mirzakhani[91921]: #011rgw_keystone_api_version = 3
Jan 31 03:07:51 np0005603663 systemd[1]: libpod-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope: Deactivated successfully.
Jan 31 03:07:51 np0005603663 podman[91947]: 2026-01-31 08:07:51.471385776 +0000 UTC m=+0.031927002 container died 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:07:51 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 03:07:51 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 03:07:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:51 np0005603663 systemd[1]: var-lib-containers-storage-overlay-70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d-merged.mount: Deactivated successfully.
Jan 31 03:07:51 np0005603663 podman[91947]: 2026-01-31 08:07:51.978965396 +0000 UTC m=+0.539506592 container remove 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:07:51 np0005603663 systemd[1]: libpod-conmon-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope: Deactivated successfully.
Jan 31 03:07:52 np0005603663 python3[92074]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:52 np0005603663 podman[92077]: 2026-01-31 08:07:52.482532882 +0000 UTC m=+0.311709993 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:07:52 np0005603663 podman[92091]: 2026-01-31 08:07:52.555858784 +0000 UTC m=+0.242084407 container create b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:52 np0005603663 podman[92091]: 2026-01-31 08:07:52.474669627 +0000 UTC m=+0.160895260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:52 np0005603663 podman[92077]: 2026-01-31 08:07:52.627914009 +0000 UTC m=+0.457091160 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:52 np0005603663 systemd[1]: Started libpod-conmon-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope.
Jan 31 03:07:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:52 np0005603663 podman[92091]: 2026-01-31 08:07:52.840934236 +0000 UTC m=+0.527159919 container init b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:52 np0005603663 podman[92091]: 2026-01-31 08:07:52.845649061 +0000 UTC m=+0.531874684 container start b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:07:52 np0005603663 podman[92091]: 2026-01-31 08:07:52.954366802 +0000 UTC m=+0.640592405 container attach b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/82400921' entity='client.admin' 
Jan 31 03:07:53 np0005603663 stoic_mestorf[92124]: set ssl_option
Jan 31 03:07:53 np0005603663 systemd[1]: libpod-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope: Deactivated successfully.
Jan 31 03:07:53 np0005603663 podman[92091]: 2026-01-31 08:07:53.631992562 +0000 UTC m=+1.318218145 container died b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647-merged.mount: Deactivated successfully.
Jan 31 03:07:54 np0005603663 podman[92091]: 2026-01-31 08:07:54.401783993 +0000 UTC m=+2.088009606 container remove b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:54 np0005603663 systemd[1]: libpod-conmon-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope: Deactivated successfully.
Jan 31 03:07:54 np0005603663 python3[92370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/82400921' entity='client.admin' 
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:54 np0005603663 podman[92386]: 2026-01-31 08:07:54.83214765 +0000 UTC m=+0.094979201 container create 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:07:54 np0005603663 podman[92386]: 2026-01-31 08:07:54.75819475 +0000 UTC m=+0.021026311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:54 np0005603663 systemd[1]: Started libpod-conmon-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope.
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:07:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:07:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:55 np0005603663 podman[92386]: 2026-01-31 08:07:55.164622615 +0000 UTC m=+0.427454226 container init 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:55 np0005603663 podman[92386]: 2026-01-31 08:07:55.174468446 +0000 UTC m=+0.437299997 container start 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:55 np0005603663 podman[92386]: 2026-01-31 08:07:55.304092363 +0000 UTC m=+0.566923924 container attach 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.483948194 +0000 UTC m=+0.100360134 container create 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.417751116 +0000 UTC m=+0.034163106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:55 np0005603663 systemd[1]: Started libpod-conmon-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope.
Jan 31 03:07:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:55 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:07:55 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 31 03:07:55 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.632662697 +0000 UTC m=+0.249074687 container init 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.639034359 +0000 UTC m=+0.255446299 container start 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:55 np0005603663 stoic_leavitt[92505]: 167 167
Jan 31 03:07:55 np0005603663 systemd[1]: libpod-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope: Deactivated successfully.
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:55 np0005603663 friendly_torvalds[92404]: Scheduled rgw.rgw update...
Jan 31 03:07:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:55 np0005603663 systemd[1]: libpod-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope: Deactivated successfully.
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.688975253 +0000 UTC m=+0.305387173 container attach 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:07:55 np0005603663 podman[92489]: 2026-01-31 08:07:55.689734135 +0000 UTC m=+0.306146065 container died 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:07:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:55 np0005603663 systemd[1]: var-lib-containers-storage-overlay-801d2c5bbed6f422da49c43ca80226fb8263c201cf51e2ae6475f4d4e8db2dab-merged.mount: Deactivated successfully.
Jan 31 03:07:56 np0005603663 podman[92489]: 2026-01-31 08:07:56.084217919 +0000 UTC m=+0.700629859 container remove 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:07:56 np0005603663 podman[92386]: 2026-01-31 08:07:56.104393184 +0000 UTC m=+1.367224745 container died 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:56 np0005603663 systemd[1]: libpod-conmon-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope: Deactivated successfully.
Jan 31 03:07:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29-merged.mount: Deactivated successfully.
Jan 31 03:07:56 np0005603663 podman[92541]: 2026-01-31 08:07:56.22029351 +0000 UTC m=+0.024073947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:56 np0005603663 podman[92386]: 2026-01-31 08:07:56.411034672 +0000 UTC m=+1.673866203 container remove 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:56 np0005603663 podman[92541]: 2026-01-31 08:07:56.495541533 +0000 UTC m=+0.299321990 container create 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:56 np0005603663 systemd[1]: Started libpod-conmon-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope.
Jan 31 03:07:56 np0005603663 systemd[1]: libpod-conmon-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope: Deactivated successfully.
Jan 31 03:07:56 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:56 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:56 np0005603663 podman[92541]: 2026-01-31 08:07:56.655754953 +0000 UTC m=+0.459535460 container init 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:56 np0005603663 podman[92541]: 2026-01-31 08:07:56.663996458 +0000 UTC m=+0.467776915 container start 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:56 np0005603663 podman[92541]: 2026-01-31 08:07:56.679911912 +0000 UTC m=+0.483692369 container attach 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:56 np0005603663 ceph-mon[75227]: Saving service rgw.rgw spec with placement compute-0
Jan 31 03:07:57 np0005603663 compassionate_ritchie[92559]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:07:57 np0005603663 compassionate_ritchie[92559]: --> All data devices are unavailable
Jan 31 03:07:57 np0005603663 systemd[1]: libpod-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope: Deactivated successfully.
Jan 31 03:07:57 np0005603663 podman[92541]: 2026-01-31 08:07:57.192434982 +0000 UTC m=+0.996215439 container died 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:07:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay-45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7-merged.mount: Deactivated successfully.
Jan 31 03:07:57 np0005603663 podman[92541]: 2026-01-31 08:07:57.246610448 +0000 UTC m=+1.050390895 container remove 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:07:57 np0005603663 systemd[1]: libpod-conmon-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope: Deactivated successfully.
Jan 31 03:07:57 np0005603663 python3[92651]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:07:57 np0005603663 python3[92786]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846876.9895024-36830-98997909560458/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:07:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.716153653 +0000 UTC m=+0.048374781 container create 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:07:57 np0005603663 systemd[1]: Started libpod-conmon-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope.
Jan 31 03:07:57 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.691489609 +0000 UTC m=+0.023710817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.792429329 +0000 UTC m=+0.124650497 container init 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.797846943 +0000 UTC m=+0.130068091 container start 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:57 np0005603663 unruffled_lamport[92839]: 167 167
Jan 31 03:07:57 np0005603663 systemd[1]: libpod-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope: Deactivated successfully.
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.80158253 +0000 UTC m=+0.133803668 container attach 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.801987221 +0000 UTC m=+0.134208339 container died 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay-96b3efe8852b5afe70540dae4a008de41d1f4b8085947019782c60acbc47e182-merged.mount: Deactivated successfully.
Jan 31 03:07:57 np0005603663 podman[92806]: 2026-01-31 08:07:57.838096351 +0000 UTC m=+0.170317469 container remove 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 03:07:57 np0005603663 systemd[1]: libpod-conmon-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope: Deactivated successfully.
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:58.047012701 +0000 UTC m=+0.109878735 container create 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:57.959186726 +0000 UTC m=+0.022052850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:07:58 np0005603663 python3[92901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:07:58 np0005603663 systemd[1]: Started libpod-conmon-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope.
Jan 31 03:07:58 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 podman[92904]: 2026-01-31 08:07:58.227239083 +0000 UTC m=+0.094629261 container create 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:07:58 np0005603663 podman[92904]: 2026-01-31 08:07:58.155572128 +0000 UTC m=+0.022962286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:07:58 np0005603663 systemd[1]: Started libpod-conmon-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope.
Jan 31 03:07:58 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:58.358119306 +0000 UTC m=+0.420985430 container init 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:58.363945072 +0000 UTC m=+0.426811136 container start 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:58.440550488 +0000 UTC m=+0.503416552 container attach 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:07:58 np0005603663 podman[92904]: 2026-01-31 08:07:58.54335245 +0000 UTC m=+0.410742608 container init 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:58 np0005603663 podman[92904]: 2026-01-31 08:07:58.552637055 +0000 UTC m=+0.420027203 container start 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:07:58 np0005603663 nervous_morse[92919]: {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    "0": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "devices": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "/dev/loop3"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            ],
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_name": "ceph_lv0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_size": "21470642176",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "name": "ceph_lv0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "tags": {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.crush_device_class": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.encrypted": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_id": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.vdo": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.with_tpm": "0"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            },
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "vg_name": "ceph_vg0"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        }
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    ],
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    "1": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "devices": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "/dev/loop4"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            ],
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_name": "ceph_lv1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_size": "21470642176",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "name": "ceph_lv1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "tags": {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.crush_device_class": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.encrypted": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_id": "1",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.vdo": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.with_tpm": "0"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            },
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "vg_name": "ceph_vg1"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        }
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    ],
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    "2": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "devices": [
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "/dev/loop5"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            ],
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_name": "ceph_lv2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_size": "21470642176",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "name": "ceph_lv2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "tags": {
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.crush_device_class": "",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.encrypted": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.objectstore": "bluestore",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osd_id": "2",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.vdo": "0",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:                "ceph.with_tpm": "0"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            },
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "type": "block",
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:            "vg_name": "ceph_vg2"
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:        }
Jan 31 03:07:58 np0005603663 nervous_morse[92919]:    ]
Jan 31 03:07:58 np0005603663 nervous_morse[92919]: }
Jan 31 03:07:58 np0005603663 systemd[1]: libpod-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope: Deactivated successfully.
Jan 31 03:07:58 np0005603663 podman[92904]: 2026-01-31 08:07:58.678610849 +0000 UTC m=+0.546001077 container attach 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:07:58 np0005603663 podman[92887]: 2026-01-31 08:07:58.679587907 +0000 UTC m=+0.742453941 container died 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:07:58 np0005603663 systemd[1]: var-lib-containers-storage-overlay-12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0-merged.mount: Deactivated successfully.
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 03:07:59 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[75223]: 2026-01-31T08:07:59.021+0000 7f922797a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e2 new map
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-01-31T08:07:59:022862+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T08:07:59.022433+0000#012modified#0112026-01-31T08:07:59.022433+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 03:07:59 np0005603663 podman[92887]: 2026-01-31 08:07:59.39546727 +0000 UTC m=+1.458333344 container remove 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 03:07:59 np0005603663 systemd[1]: libpod-conmon-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope: Deactivated successfully.
Jan 31 03:07:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 03:07:59 np0005603663 systemd[1]: libpod-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope: Deactivated successfully.
Jan 31 03:07:59 np0005603663 podman[92904]: 2026-01-31 08:07:59.58303097 +0000 UTC m=+1.450421208 container died 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:07:59 np0005603663 systemd[1]: var-lib-containers-storage-overlay-20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f-merged.mount: Deactivated successfully.
Jan 31 03:07:59 np0005603663 podman[92904]: 2026-01-31 08:07:59.86380159 +0000 UTC m=+1.731191768 container remove 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:59 np0005603663 systemd[1]: libpod-conmon-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope: Deactivated successfully.
Jan 31 03:07:59 np0005603663 podman[93045]: 2026-01-31 08:07:59.997042501 +0000 UTC m=+0.098133671 container create cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:07:59.929437232 +0000 UTC m=+0.030528452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:00 np0005603663 systemd[1]: Started libpod-conmon-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope.
Jan 31 03:08:00 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:00 np0005603663 python3[93084]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:08:00.239414775 +0000 UTC m=+0.340505935 container init cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:08:00.245097697 +0000 UTC m=+0.346188837 container start cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:00 np0005603663 gracious_lalande[93087]: 167 167
Jan 31 03:08:00 np0005603663 systemd[1]: libpod-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope: Deactivated successfully.
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:08:00.314884178 +0000 UTC m=+0.415975428 container attach cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:08:00.315519947 +0000 UTC m=+0.416611117 container died cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:08:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 03:08:00 np0005603663 ceph-mon[75227]: Saving service mds.cephfs spec with placement compute-0
Jan 31 03:08:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:00 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8a63cf6bd11f30f87e41c697eb8c64cd3b1400600c42b13b3c022e1c10b3663b-merged.mount: Deactivated successfully.
Jan 31 03:08:00 np0005603663 podman[93045]: 2026-01-31 08:08:00.952022803 +0000 UTC m=+1.053113943 container remove cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:00 np0005603663 systemd[1]: libpod-conmon-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope: Deactivated successfully.
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:01.018927562 +0000 UTC m=+0.784733977 container create fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:00.988091722 +0000 UTC m=+0.753898217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:01 np0005603663 systemd[1]: Started libpod-conmon-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope.
Jan 31 03:08:01 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 podman[93125]: 2026-01-31 08:08:01.120416837 +0000 UTC m=+0.078139030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:01 np0005603663 podman[93125]: 2026-01-31 08:08:01.225192066 +0000 UTC m=+0.182914199 container create 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:01.269766628 +0000 UTC m=+1.035573103 container init fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:01.278529308 +0000 UTC m=+1.044335733 container start fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:01.34451644 +0000 UTC m=+1.110322875 container attach fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:01 np0005603663 systemd[1]: Started libpod-conmon-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope.
Jan 31 03:08:01 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:01 np0005603663 podman[93125]: 2026-01-31 08:08:01.540939904 +0000 UTC m=+0.498662047 container init 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:08:01 np0005603663 podman[93125]: 2026-01-31 08:08:01.547217933 +0000 UTC m=+0.504940046 container start 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:08:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:01 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:08:01 np0005603663 ceph-mgr[75519]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 03:08:01 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 03:08:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 03:08:01 np0005603663 podman[93125]: 2026-01-31 08:08:01.738970403 +0000 UTC m=+0.696692506 container attach 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:01 np0005603663 kind_napier[93141]: Scheduled mds.cephfs update...
Jan 31 03:08:01 np0005603663 systemd[1]: libpod-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope: Deactivated successfully.
Jan 31 03:08:01 np0005603663 podman[93090]: 2026-01-31 08:08:01.80264813 +0000 UTC m=+1.568454545 container died fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606-merged.mount: Deactivated successfully.
Jan 31 03:08:02 np0005603663 lvm[93259]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:08:02 np0005603663 lvm[93259]: VG ceph_vg1 finished
Jan 31 03:08:02 np0005603663 lvm[93256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:08:02 np0005603663 lvm[93256]: VG ceph_vg0 finished
Jan 31 03:08:02 np0005603663 lvm[93261]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:08:02 np0005603663 lvm[93261]: VG ceph_vg2 finished
Jan 31 03:08:02 np0005603663 silly_jemison[93166]: {}
Jan 31 03:08:02 np0005603663 systemd[1]: libpod-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Deactivated successfully.
Jan 31 03:08:02 np0005603663 systemd[1]: libpod-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Consumed 1.082s CPU time.
Jan 31 03:08:02 np0005603663 podman[93090]: 2026-01-31 08:08:02.430920383 +0000 UTC m=+2.196726778 container remove fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:02 np0005603663 systemd[1]: libpod-conmon-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope: Deactivated successfully.
Jan 31 03:08:02 np0005603663 podman[93125]: 2026-01-31 08:08:02.479820588 +0000 UTC m=+1.437542731 container died 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9-merged.mount: Deactivated successfully.
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: Saving service mds.cephfs spec with placement compute-0
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:03 np0005603663 podman[93264]: 2026-01-31 08:08:03.220164058 +0000 UTC m=+0.903776324 container remove 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:08:03 np0005603663 systemd[1]: libpod-conmon-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Deactivated successfully.
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:03 np0005603663 python3[93357]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:03 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:03 np0005603663 python3[93430]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846883.1898232-36878-228351024344322/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=5ead94c69bd1df72757f346af781128058784f3a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:08:04 np0005603663 python3[93586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:04 np0005603663 podman[93597]: 2026-01-31 08:08:04.321863675 +0000 UTC m=+0.214863210 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:08:05 np0005603663 podman[93611]: 2026-01-31 08:08:04.318670174 +0000 UTC m=+0.099614322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:05 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:05 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:05 np0005603663 podman[93611]: 2026-01-31 08:08:05.39924641 +0000 UTC m=+1.180190518 container create c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:05 np0005603663 systemd[1]: Started libpod-conmon-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope.
Jan 31 03:08:05 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:05 np0005603663 podman[93611]: 2026-01-31 08:08:05.859686406 +0000 UTC m=+1.640630564 container init c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:05 np0005603663 podman[93611]: 2026-01-31 08:08:05.865110451 +0000 UTC m=+1.646054579 container start c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 31 03:08:06 np0005603663 podman[93611]: 2026-01-31 08:08:06.013582636 +0000 UTC m=+1.794526784 container attach c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:06 np0005603663 podman[93597]: 2026-01-31 08:08:06.077409327 +0000 UTC m=+1.970408862 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:08:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 31 03:08:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 03:08:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 03:08:06 np0005603663 systemd[1]: libpod-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope: Deactivated successfully.
Jan 31 03:08:06 np0005603663 podman[93611]: 2026-01-31 08:08:06.457227842 +0000 UTC m=+2.238192071 container died c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:08:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20-merged.mount: Deactivated successfully.
Jan 31 03:08:06 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 03:08:06 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 03:08:06 np0005603663 podman[93611]: 2026-01-31 08:08:06.833769604 +0000 UTC m=+2.614713742 container remove c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:06 np0005603663 systemd[1]: libpod-conmon-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope: Deactivated successfully.
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:07 np0005603663 python3[93848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:07 np0005603663 podman[93879]: 2026-01-31 08:08:07.722239129 +0000 UTC m=+0.108543128 container create 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:08:07 np0005603663 podman[93879]: 2026-01-31 08:08:07.637699547 +0000 UTC m=+0.024003586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:07 np0005603663 systemd[1]: Started libpod-conmon-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope.
Jan 31 03:08:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:07 np0005603663 podman[93879]: 2026-01-31 08:08:07.97290597 +0000 UTC m=+0.359210039 container init 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:07 np0005603663 podman[93879]: 2026-01-31 08:08:07.982055291 +0000 UTC m=+0.368359280 container start 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:07.938347284 +0000 UTC m=+0.200131040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:08 np0005603663 podman[93879]: 2026-01-31 08:08:08.104301168 +0000 UTC m=+0.490605127 container attach 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.176686543 +0000 UTC m=+0.438470299 container create f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:08:08 np0005603663 systemd[1]: Started libpod-conmon-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope.
Jan 31 03:08:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.257612582 +0000 UTC m=+0.519396388 container init f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.26456733 +0000 UTC m=+0.526351096 container start f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:08 np0005603663 unruffled_mestorf[93948]: 167 167
Jan 31 03:08:08 np0005603663 systemd[1]: libpod-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope: Deactivated successfully.
Jan 31 03:08:08 np0005603663 conmon[93948]: conmon f0fb042824d0f33e2aa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope/container/memory.events
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.269798659 +0000 UTC m=+0.531582465 container attach f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.270154549 +0000 UTC m=+0.531938305 container died f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:08:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7eaabd533092cac6ae6846c663bfa7f58ac7329711d570dad79254f2aa564ba7-merged.mount: Deactivated successfully.
Jan 31 03:08:08 np0005603663 podman[93906]: 2026-01-31 08:08:08.310167121 +0000 UTC m=+0.571950847 container remove f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:08:08 np0005603663 systemd[1]: libpod-conmon-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope: Deactivated successfully.
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:08 np0005603663 podman[93973]: 2026-01-31 08:08:08.460888981 +0000 UTC m=+0.056278067 container create 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 03:08:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508594488' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 03:08:08 np0005603663 xenodochial_babbage[93921]: 
Jan 31 03:08:08 np0005603663 xenodochial_babbage[93921]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846853,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83894272,"bytes_avail":64328032256,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T08:07:59:022862+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T08:07:33.658076+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
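The blob logged by `xenodochial_babbage` above is the output of `ceph status --format json`. A minimal sketch of pulling the health summary out of a payload with that shape (field names are taken from the log line itself; the helper name and the trimmed sample are illustrative, not a Ceph API):

```python
import json

def health_summary(status_json: str):
    """Return (overall status, [(check name, severity, message), ...])
    from a `ceph status --format json` payload."""
    s = json.loads(status_json)
    checks = [
        (name, c["severity"], c["summary"]["message"])
        for name, c in s["health"]["checks"].items()
    ]
    return s["health"]["status"], checks

# Trimmed sample matching the shape of the log line above.
sample = json.dumps({"health": {"status": "HEALTH_ERR", "checks": {
    "MDS_ALL_DOWN": {
        "severity": "HEALTH_ERR",
        "summary": {"message": "1 filesystem is offline", "count": 1},
        "muted": False}}}})

status, checks = health_summary(sample)
```

Here `status` comes back as `HEALTH_ERR` with the `MDS_ALL_DOWN` check attached, matching the HEALTH_ERR state visible in the log.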
Jan 31 03:08:08 np0005603663 systemd[1]: Started libpod-conmon-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope.
Jan 31 03:08:08 np0005603663 podman[93879]: 2026-01-31 08:08:08.511456733 +0000 UTC m=+0.897760692 container died 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:08:08 np0005603663 systemd[1]: libpod-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope: Deactivated successfully.
Jan 31 03:08:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:08 np0005603663 podman[93973]: 2026-01-31 08:08:08.437322298 +0000 UTC m=+0.032711424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e-merged.mount: Deactivated successfully.
Jan 31 03:08:08 np0005603663 podman[93973]: 2026-01-31 08:08:08.55902297 +0000 UTC m=+0.154412066 container init 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:08:08 np0005603663 podman[93973]: 2026-01-31 08:08:08.572927877 +0000 UTC m=+0.168316963 container start 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:08 np0005603663 podman[93879]: 2026-01-31 08:08:08.582377306 +0000 UTC m=+0.968681265 container remove 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:08:08 np0005603663 systemd[1]: libpod-conmon-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope: Deactivated successfully.
Jan 31 03:08:08 np0005603663 podman[93973]: 2026-01-31 08:08:08.596831539 +0000 UTC m=+0.192220695 container attach 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:08 np0005603663 python3[94031]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
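The Ansible task above shells out to `podman run --rm` with `--entrypoint ceph`, so the cluster CLI runs from the container image against the host's `/etc/ceph` config without a host-side ceph install. A sketch of composing that argv (image, fsid, and flags copied from the log line; the helper name is hypothetical, and it only builds the list rather than executing it):

```python
def ceph_via_podman(image, fsid, *ceph_args):
    """Hypothetical helper: build the podman argv used by the task above.
    Returns the argument list; it does not run podman."""
    return [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", image,
        "--fsid", fsid,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        *ceph_args,
    ]

argv = ceph_via_podman(
    "quay.io/ceph/ceph:v20",
    "82c880e6-d992-5408-8b12-efff9c275473",
    "mon", "dump", "--format", "json",
)
```

Passing a list like this to `subprocess.run` avoids shell quoting issues; `--rm` is why each invocation shows up in the log as a create/start/died/remove cycle.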
Jan 31 03:08:08 np0005603663 podman[94041]: 2026-01-31 08:08:08.989152871 +0000 UTC m=+0.080152428 container create 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:09 np0005603663 modest_mestorf[93991]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:08:09 np0005603663 modest_mestorf[93991]: --> All data devices are unavailable
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:08.947202024 +0000 UTC m=+0.038201561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:09 np0005603663 podman[93973]: 2026-01-31 08:08:09.109693889 +0000 UTC m=+0.705082975 container died 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:09 np0005603663 systemd[1]: Started libpod-conmon-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope.
Jan 31 03:08:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:09.203941668 +0000 UTC m=+0.294941215 container init 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:09.212354978 +0000 UTC m=+0.303354515 container start 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:09.240848451 +0000 UTC m=+0.331848048 container attach 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0-merged.mount: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94060]: 2026-01-31 08:08:09.290331032 +0000 UTC m=+0.245164174 container remove 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-conmon-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope: Deactivated successfully.
Jan 31 03:08:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.740409922 +0000 UTC m=+0.065647134 container create 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 03:08:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47939758' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 03:08:09 np0005603663 charming_nash[94075]: 
Jan 31 03:08:09 np0005603663 charming_nash[94075]: {"epoch":1,"fsid":"82c880e6-d992-5408-8b12-efff9c275473","modified":"2026-01-31T08:06:09.429767Z","created":"2026-01-31T08:06:09.429767Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 31 03:08:09 np0005603663 charming_nash[94075]: dumped monmap epoch 1
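The `charming_nash` output above is `ceph mon dump --format json`: a monmap with one monitor exposing a msgr v2 endpoint on 3300 and a legacy v1 endpoint on 6789. A sketch of extracting each monitor's endpoints from that shape (keys as shown in the log line; this parses the JSON directly and is not the librados API):

```python
import json

def mon_endpoints(monmap_json: str):
    """Map mon name -> {protocol type: address} from a `mon dump` payload."""
    m = json.loads(monmap_json)
    return {
        mon["name"]: {a["type"]: a["addr"]
                      for a in mon["public_addrs"]["addrvec"]}
        for mon in m["mons"]
    }

# Trimmed sample matching the monmap logged above.
sample = json.dumps({"mons": [{"name": "compute-0", "public_addrs": {
    "addrvec": [
        {"type": "v2", "addr": "192.168.122.100:3300", "nonce": 0},
        {"type": "v1", "addr": "192.168.122.100:6789", "nonce": 0}]}}]})

eps = mon_endpoints(sample)
```

With the single-mon map above this yields one entry for `compute-0` carrying both the v2 and v1 addresses.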
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:09.758858568 +0000 UTC m=+0.849858075 container died 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 03:08:09 np0005603663 systemd[1]: Started libpod-conmon-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope.
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.695348176 +0000 UTC m=+0.020585398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106-merged.mount: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94041]: 2026-01-31 08:08:09.818201721 +0000 UTC m=+0.909201238 container remove 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-conmon-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.834369562 +0000 UTC m=+0.159606834 container init 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.840762335 +0000 UTC m=+0.165999527 container start 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 31 03:08:09 np0005603663 objective_bartik[94190]: 167 167
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope: Deactivated successfully.
Jan 31 03:08:09 np0005603663 conmon[94190]: conmon 5e0f923888ae588b7640 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope/container/memory.events
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.848492935 +0000 UTC m=+0.173730197 container attach 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.84899726 +0000 UTC m=+0.174234452 container died 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6b014380395d5209fcd0564845332ef1ca5987f0cd5f2af8d0c97cfd18c11d0c-merged.mount: Deactivated successfully.
Jan 31 03:08:09 np0005603663 podman[94161]: 2026-01-31 08:08:09.899466599 +0000 UTC m=+0.224703801 container remove 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:08:09 np0005603663 systemd[1]: libpod-conmon-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope: Deactivated successfully.
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.066524215 +0000 UTC m=+0.052739105 container create 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:10 np0005603663 systemd[1]: Started libpod-conmon-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope.
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.043401776 +0000 UTC m=+0.029616646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.165939221 +0000 UTC m=+0.152154101 container init 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.171474409 +0000 UTC m=+0.157689299 container start 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.176613966 +0000 UTC m=+0.162828866 container attach 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:10 np0005603663 python3[94264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:10 np0005603663 podman[94267]: 2026-01-31 08:08:10.392144034 +0000 UTC m=+0.047052903 container create 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:08:10 np0005603663 systemd[1]: Started libpod-conmon-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope.
Jan 31 03:08:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:10 np0005603663 podman[94267]: 2026-01-31 08:08:10.376351034 +0000 UTC m=+0.031259883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]: {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    "0": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "devices": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "/dev/loop3"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            ],
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_name": "ceph_lv0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_size": "21470642176",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "name": "ceph_lv0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "tags": {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.crush_device_class": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.encrypted": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_id": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.vdo": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.with_tpm": "0"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            },
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "vg_name": "ceph_vg0"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        }
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    ],
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    "1": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "devices": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "/dev/loop4"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            ],
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_name": "ceph_lv1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_size": "21470642176",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "name": "ceph_lv1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "tags": {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.crush_device_class": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.encrypted": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_id": "1",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.vdo": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.with_tpm": "0"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            },
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "vg_name": "ceph_vg1"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        }
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    ],
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    "2": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "devices": [
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "/dev/loop5"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            ],
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_name": "ceph_lv2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_size": "21470642176",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "name": "ceph_lv2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "tags": {
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.crush_device_class": "",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.encrypted": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osd_id": "2",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.vdo": "0",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:                "ceph.with_tpm": "0"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            },
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "type": "block",
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:            "vg_name": "ceph_vg2"
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:        }
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]:    ]
Jan 31 03:08:10 np0005603663 thirsty_benz[94234]: }
Jan 31 03:08:10 np0005603663 podman[94267]: 2026-01-31 08:08:10.476405648 +0000 UTC m=+0.131314567 container init 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:10 np0005603663 podman[94267]: 2026-01-31 08:08:10.481668828 +0000 UTC m=+0.136577697 container start 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:08:10 np0005603663 podman[94267]: 2026-01-31 08:08:10.484552121 +0000 UTC m=+0.139460950 container attach 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:08:10 np0005603663 systemd[1]: libpod-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope: Deactivated successfully.
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.49889427 +0000 UTC m=+0.485109110 container died 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:08:10 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826-merged.mount: Deactivated successfully.
Jan 31 03:08:10 np0005603663 podman[94218]: 2026-01-31 08:08:10.530694397 +0000 UTC m=+0.516909247 container remove 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:08:10 np0005603663 systemd[1]: libpod-conmon-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope: Deactivated successfully.
Jan 31 03:08:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 31 03:08:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1595703628' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 03:08:11 np0005603663 lucid_moore[94283]: [client.openstack]
Jan 31 03:08:11 np0005603663 lucid_moore[94283]: #011key = AQDNt31pAAAAABAAYp99ADqsmeg1iSEhkwiYUA==
Jan 31 03:08:11 np0005603663 lucid_moore[94283]: #011caps mgr = "allow *"
Jan 31 03:08:11 np0005603663 lucid_moore[94283]: #011caps mon = "profile rbd"
Jan 31 03:08:11 np0005603663 lucid_moore[94283]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 03:08:11 np0005603663 podman[94378]: 2026-01-31 08:08:10.924007707 +0000 UTC m=+0.019075735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:11 np0005603663 systemd[1]: libpod-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope: Deactivated successfully.
Jan 31 03:08:11 np0005603663 podman[94378]: 2026-01-31 08:08:11.100034848 +0000 UTC m=+0.195102856 container create a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:11 np0005603663 podman[94267]: 2026-01-31 08:08:11.10080528 +0000 UTC m=+0.755714149 container died 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee-merged.mount: Deactivated successfully.
Jan 31 03:08:11 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/1595703628' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 03:08:11 np0005603663 podman[94267]: 2026-01-31 08:08:11.664688916 +0000 UTC m=+1.319597785 container remove 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:08:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:11 np0005603663 systemd[1]: libpod-conmon-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope: Deactivated successfully.
Jan 31 03:08:11 np0005603663 systemd[1]: Started libpod-conmon-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope.
Jan 31 03:08:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:12 np0005603663 podman[94378]: 2026-01-31 08:08:12.004733047 +0000 UTC m=+1.099801155 container init a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:08:12 np0005603663 podman[94378]: 2026-01-31 08:08:12.012583031 +0000 UTC m=+1.107651079 container start a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:12 np0005603663 recursing_cartwright[94409]: 167 167
Jan 31 03:08:12 np0005603663 systemd[1]: libpod-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope: Deactivated successfully.
Jan 31 03:08:12 np0005603663 podman[94378]: 2026-01-31 08:08:12.031685726 +0000 UTC m=+1.126753744 container attach a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:12 np0005603663 podman[94378]: 2026-01-31 08:08:12.032584101 +0000 UTC m=+1.127652119 container died a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:08:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-57b7e697e5c183e316d8e8ff38c2205d20278acc1ad8a198ed16c358189a5177-merged.mount: Deactivated successfully.
Jan 31 03:08:12 np0005603663 podman[94378]: 2026-01-31 08:08:12.572159684 +0000 UTC m=+1.667227722 container remove a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:08:12 np0005603663 systemd[1]: libpod-conmon-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope: Deactivated successfully.
Jan 31 03:08:12 np0005603663 podman[94477]: 2026-01-31 08:08:12.823958527 +0000 UTC m=+0.113070927 container create a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:08:12 np0005603663 podman[94477]: 2026-01-31 08:08:12.746194509 +0000 UTC m=+0.035306979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:12 np0005603663 systemd[1]: Started libpod-conmon-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope.
Jan 31 03:08:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 ansible-async_wrapper.py[94597]: Invoked with j723234989313 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.658375-36952-30002951411632/AnsiballZ_command.py _
Jan 31 03:08:13 np0005603663 ansible-async_wrapper.py[94605]: Starting module and watcher
Jan 31 03:08:13 np0005603663 ansible-async_wrapper.py[94605]: Start watching 94606 (30)
Jan 31 03:08:13 np0005603663 ansible-async_wrapper.py[94606]: Start module (94606)
Jan 31 03:08:13 np0005603663 ansible-async_wrapper.py[94597]: Return async_wrapper task started.
Jan 31 03:08:13 np0005603663 podman[94477]: 2026-01-31 08:08:13.17635095 +0000 UTC m=+0.465463370 container init a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:13 np0005603663 podman[94477]: 2026-01-31 08:08:13.183522174 +0000 UTC m=+0.472634574 container start a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:13 np0005603663 python3[94607]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:13 np0005603663 podman[94477]: 2026-01-31 08:08:13.267401777 +0000 UTC m=+0.556514177 container attach a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:08:13 np0005603663 podman[94610]: 2026-01-31 08:08:13.274906451 +0000 UTC m=+0.022971916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:13 np0005603663 podman[94610]: 2026-01-31 08:08:13.463628055 +0000 UTC m=+0.211693500 container create c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:08:13 np0005603663 systemd[1]: Started libpod-conmon-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope.
Jan 31 03:08:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:13 np0005603663 lvm[94702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:08:13 np0005603663 lvm[94702]: VG ceph_vg0 finished
Jan 31 03:08:13 np0005603663 lvm[94703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:08:13 np0005603663 lvm[94703]: VG ceph_vg1 finished
Jan 31 03:08:13 np0005603663 lvm[94705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:08:13 np0005603663 lvm[94705]: VG ceph_vg2 finished
Jan 31 03:08:13 np0005603663 podman[94610]: 2026-01-31 08:08:13.893747506 +0000 UTC m=+0.641812981 container init c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:13 np0005603663 podman[94610]: 2026-01-31 08:08:13.902282709 +0000 UTC m=+0.650348174 container start c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:08:13 np0005603663 boring_sammet[94600]: {}
Jan 31 03:08:13 np0005603663 podman[94610]: 2026-01-31 08:08:13.944906735 +0000 UTC m=+0.692972210 container attach c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:13 np0005603663 systemd[1]: libpod-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Deactivated successfully.
Jan 31 03:08:13 np0005603663 systemd[1]: libpod-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Consumed 1.011s CPU time.
Jan 31 03:08:13 np0005603663 podman[94477]: 2026-01-31 08:08:13.952493241 +0000 UTC m=+1.241605651 container died a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:08:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721-merged.mount: Deactivated successfully.
Jan 31 03:08:14 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:08:14 np0005603663 elastic_chatterjee[94691]: 
Jan 31 03:08:14 np0005603663 elastic_chatterjee[94691]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 03:08:14 np0005603663 systemd[1]: libpod-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope: Deactivated successfully.
Jan 31 03:08:14 np0005603663 podman[94610]: 2026-01-31 08:08:14.439301149 +0000 UTC m=+1.187366764 container died c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:14 np0005603663 python3[94787]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=status _async_dir=/root/.ansible_async
Jan 31 03:08:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741-merged.mount: Deactivated successfully.
Jan 31 03:08:14 np0005603663 podman[94610]: 2026-01-31 08:08:14.712872522 +0000 UTC m=+1.460938007 container remove c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:14 np0005603663 ansible-async_wrapper.py[94606]: Module complete (94606)
Jan 31 03:08:14 np0005603663 podman[94477]: 2026-01-31 08:08:14.886686681 +0000 UTC m=+2.175799071 container remove a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:08:14 np0005603663 systemd[1]: libpod-conmon-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Deactivated successfully.
Jan 31 03:08:14 np0005603663 systemd[1]: libpod-conmon-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope: Deactivated successfully.
Jan 31 03:08:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:15 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:15 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 03:08:15 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 03:08:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:15 np0005603663 python3[94900]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=status _async_dir=/root/.ansible_async
Jan 31 03:08:15 np0005603663 podman[94941]: 2026-01-31 08:08:15.811855524 +0000 UTC m=+0.022518374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:15 np0005603663 podman[94941]: 2026-01-31 08:08:15.980856805 +0000 UTC m=+0.191519655 container create 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:08:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 03:08:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 03:08:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:16 np0005603663 python3[95003]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 03:08:16 np0005603663 systemd[1]: Started libpod-conmon-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope.
Jan 31 03:08:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:16 np0005603663 podman[94941]: 2026-01-31 08:08:16.398839969 +0000 UTC m=+0.609502899 container init 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:08:16 np0005603663 podman[94941]: 2026-01-31 08:08:16.407620179 +0000 UTC m=+0.618283029 container start 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:08:16 np0005603663 distracted_dijkstra[95006]: 167 167
Jan 31 03:08:16 np0005603663 systemd[1]: libpod-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope: Deactivated successfully.
Jan 31 03:08:16 np0005603663 conmon[95006]: conmon 60790b49ac87ccbf2aca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope/container/memory.events
Jan 31 03:08:16 np0005603663 podman[94941]: 2026-01-31 08:08:16.667124332 +0000 UTC m=+0.877787182 container attach 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 03:08:16 np0005603663 podman[94941]: 2026-01-31 08:08:16.667646167 +0000 UTC m=+0.878309007 container died 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:08:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9413820dcad20d90559af2c8295e93a343c318d11ad51ae6aa810383d974d2f7-merged.mount: Deactivated successfully.
Jan 31 03:08:16 np0005603663 python3[95048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:17 np0005603663 podman[94941]: 2026-01-31 08:08:17.134438444 +0000 UTC m=+1.345101294 container remove 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:08:17 np0005603663 systemd[1]: libpod-conmon-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope: Deactivated successfully.
Jan 31 03:08:17 np0005603663 podman[95051]: 2026-01-31 08:08:17.166431467 +0000 UTC m=+0.213931835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:17 np0005603663 ceph-mon[75227]: Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 03:08:17 np0005603663 podman[95051]: 2026-01-31 08:08:17.443916563 +0000 UTC m=+0.491416901 container create a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:08:17 np0005603663 systemd[1]: Started libpod-conmon-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope.
Jan 31 03:08:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:17 np0005603663 podman[95051]: 2026-01-31 08:08:17.567450437 +0000 UTC m=+0.614950845 container init a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:08:17 np0005603663 podman[95051]: 2026-01-31 08:08:17.575934189 +0000 UTC m=+0.623434557 container start a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:17 np0005603663 podman[95051]: 2026-01-31 08:08:17.600699925 +0000 UTC m=+0.648200263 container attach a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:08:17 np0005603663 systemd[1]: Reloading.
Jan 31 03:08:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:17 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:08:17 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:08:17 np0005603663 systemd[1]: Reloading.
Jan 31 03:08:17 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:08:17 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:08:18 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:08:18 np0005603663 admiring_keller[95067]: 
Jan 31 03:08:18 np0005603663 admiring_keller[95067]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 03:08:18 np0005603663 podman[95051]: 2026-01-31 08:08:18.070553889 +0000 UTC m=+1.118054217 container died a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:18 np0005603663 ansible-async_wrapper.py[94605]: Done in kid B.
Jan 31 03:08:18 np0005603663 systemd[1]: libpod-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope: Deactivated successfully.
Jan 31 03:08:18 np0005603663 systemd[1]: Starting Ceph rgw.rgw.compute-0.dnvgmk for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:08:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29-merged.mount: Deactivated successfully.
Jan 31 03:08:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:18 np0005603663 podman[95051]: 2026-01-31 08:08:18.763021182 +0000 UTC m=+1.810521530 container remove a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:18 np0005603663 systemd[1]: libpod-conmon-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope: Deactivated successfully.
Jan 31 03:08:18 np0005603663 podman[95230]: 2026-01-31 08:08:18.871532897 +0000 UTC m=+0.019718703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:18 np0005603663 podman[95230]: 2026-01-31 08:08:18.992626552 +0000 UTC m=+0.140812358 container create d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:08:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dnvgmk supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:19 np0005603663 podman[95230]: 2026-01-31 08:08:19.175501079 +0000 UTC m=+0.323686965 container init d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:08:19 np0005603663 podman[95230]: 2026-01-31 08:08:19.184715742 +0000 UTC m=+0.332901558 container start d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:19 np0005603663 radosgw[95251]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:08:19 np0005603663 radosgw[95251]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 31 03:08:19 np0005603663 radosgw[95251]: framework: beast
Jan 31 03:08:19 np0005603663 radosgw[95251]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 03:08:19 np0005603663 radosgw[95251]: init_numa not setting numa affinity
Jan 31 03:08:19 np0005603663 bash[95230]: d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb
Jan 31 03:08:19 np0005603663 systemd[1]: Started Ceph rgw.rgw.compute-0.dnvgmk for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:19 np0005603663 python3[95305]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 03:08:19 np0005603663 podman[95306]: 2026-01-31 08:08:19.771563044 +0000 UTC m=+0.043711428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:19 np0005603663 podman[95306]: 2026-01-31 08:08:19.960128703 +0000 UTC m=+0.232277027 container create 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1)) in 5 seconds
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 03:08:20 np0005603663 systemd[1]: Started libpod-conmon-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope.
Jan 31 03:08:20 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 03:08:20 np0005603663 podman[95306]: 2026-01-31 08:08:20.386720183 +0000 UTC m=+0.658868537 container init 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:08:20 np0005603663 podman[95306]: 2026-01-31 08:08:20.395544865 +0000 UTC m=+0.667693179 container start 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:08:20 np0005603663 podman[95306]: 2026-01-31 08:08:20.428902955 +0000 UTC m=+0.701051279 container attach 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 03:08:20 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 03:08:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 03:08:20 np0005603663 eloquent_golick[95321]: 
Jan 31 03:08:20 np0005603663 eloquent_golick[95321]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 31 03:08:20 np0005603663 systemd[1]: libpod-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope: Deactivated successfully.
Jan 31 03:08:20 np0005603663 podman[95306]: 2026-01-31 08:08:20.864679967 +0000 UTC m=+1.136828251 container died 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: Saving service rgw.rgw spec with placement compute-0
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 03:08:21 np0005603663 systemd[1]: var-lib-containers-storage-overlay-936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5-merged.mount: Deactivated successfully.
Jan 31 03:08:21 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 03:08:21 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:21 np0005603663 podman[95306]: 2026-01-31 08:08:21.581773104 +0000 UTC m=+1.853921408 container remove 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:08:21 np0005603663 systemd[1]: libpod-conmon-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope: Deactivated successfully.
Jan 31 03:08:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v80: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:21 np0005603663 podman[95446]: 2026-01-31 08:08:21.871660803 +0000 UTC m=+0.119689304 container create 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:21 np0005603663 podman[95446]: 2026-01-31 08:08:21.783391056 +0000 UTC m=+0.031419627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:21 np0005603663 systemd[1]: Started libpod-conmon-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope.
Jan 31 03:08:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 03:08:22 np0005603663 podman[95446]: 2026-01-31 08:08:22.14813396 +0000 UTC m=+0.396162561 container init 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:08:22 np0005603663 podman[95446]: 2026-01-31 08:08:22.156138939 +0000 UTC m=+0.404167450 container start 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:08:22 np0005603663 jolly_hopper[96022]: 167 167
Jan 31 03:08:22 np0005603663 systemd[1]: libpod-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope: Deactivated successfully.
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 03:08:22 np0005603663 podman[95446]: 2026-01-31 08:08:22.207469883 +0000 UTC m=+0.455498504 container attach 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:08:22 np0005603663 podman[95446]: 2026-01-31 08:08:22.208618026 +0000 UTC m=+0.456646587 container died 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 03:08:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-545a2788c1aa01335b5140d7307f95b2d61758c4b6e814f0bba2c35f7cd6a4ae-merged.mount: Deactivated successfully.
Jan 31 03:08:22 np0005603663 python3[96065]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:22 np0005603663 podman[95446]: 2026-01-31 08:08:22.688168096 +0000 UTC m=+0.936196637 container remove 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:08:22 np0005603663 systemd[1]: libpod-conmon-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope: Deactivated successfully.
Jan 31 03:08:22 np0005603663 podman[96069]: 2026-01-31 08:08:22.629226395 +0000 UTC m=+0.082054462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:22 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 4 completed events
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:08:22 np0005603663 podman[96069]: 2026-01-31 08:08:22.812410271 +0000 UTC m=+0.265238328 container create a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:08:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:22 np0005603663 ceph-mgr[75519]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 31 03:08:22 np0005603663 systemd[1]: Started libpod-conmon-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope.
Jan 31 03:08:22 np0005603663 systemd[1]: Reloading.
Jan 31 03:08:23 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:08:23 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:08:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 03:08:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:23 np0005603663 podman[96069]: 2026-01-31 08:08:23.216808017 +0000 UTC m=+0.669636104 container init a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:23 np0005603663 podman[96069]: 2026-01-31 08:08:23.225905317 +0000 UTC m=+0.678733374 container start a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:08:23 np0005603663 podman[96069]: 2026-01-31 08:08:23.229271043 +0000 UTC m=+0.682099140 container attach a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:08:23 np0005603663 systemd[1]: Reloading.
Jan 31 03:08:23 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:08:23 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 03:08:23 np0005603663 systemd[1]: Starting Ceph mds.cephfs.compute-0.nafbok for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 03:08:23 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 03:08:23 np0005603663 laughing_hofstadter[96087]: 
Jan 31 03:08:23 np0005603663 laughing_hofstadter[96087]: [{"container_id": "a94e6142bb25", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.25%", "created": "2026-01-31T08:06:53.646207Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T08:06:53.716669Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361962Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T08:06:53.532959Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@crash.compute-0", "version": "20.2.0"}, {"container_id": "469c441ebd04", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.40%", "created": "2026-01-31T08:06:14.835074Z", "daemon_id": "compute-0.fqetdi", "daemon_name": "mgr.compute-0.fqetdi", "daemon_type": "mgr", "events": ["2026-01-31T08:06:58.210393Z daemon:mgr.compute-0.fqetdi [INFO] \"Reconfigured mgr.compute-0.fqetdi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361809Z", "memory_usage": 548090675, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T08:06:14.759640Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.fqetdi", "version": "20.2.0"}, {"container_id": "2c160fb9852a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.58%", "created": "2026-01-31T08:06:11.262972Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T08:06:57.580410Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361630Z", "memory_request": 2147483648, "memory_usage": 39405486, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T08:06:13.128980Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@mon.compute-0", "version": "20.2.0"}, {"container_id": "a780c474029a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.59%", "created": "2026-01-31T08:07:15.518432Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T08:07:15.601787Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.362114Z", "memory_request": 4294967296, "memory_usage": 58961428, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:15.392122Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.0", "version": "20.2.0"}, {"container_id": "679fb36577e7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.75%", "created": "2026-01-31T08:07:20.299569Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T08:07:20.444272Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.362289Z", "memory_request": 4294967296, "memory_usage": 58038681, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:20.084069Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.1", "version": "20.2.0"}, {"container_id": "b5c171002b43", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.92%", "created": "2026-01-31T08:07:26.413743Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T08:07:27.320070Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.362467Z", "memory_request": 4294967296, "memory_usage": 56423874, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:25.815376Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.2", "version": "20.2.0"}, {"daemon_id": "rgw.compute-0.dnvgmk", "daemon_name": "rgw.rgw.compute-0.dnvgmk", "daemon_type": "rgw", "events": ["2026-01-31T08:08:19.959068Z daemon:rgw.rgw.compute-0.dnvgmk [INFO] \"Deployed rgw.rgw.compute-0.dnvgmk on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 31 03:08:23 np0005603663 systemd[1]: libpod-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope: Deactivated successfully.
Jan 31 03:08:23 np0005603663 podman[96069]: 2026-01-31 08:08:23.620540805 +0000 UTC m=+1.073368862 container died a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7-merged.mount: Deactivated successfully.
Jan 31 03:08:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v83: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:23 np0005603663 podman[96069]: 2026-01-31 08:08:23.672992581 +0000 UTC m=+1.125820628 container remove a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:08:23 np0005603663 systemd[1]: libpod-conmon-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope: Deactivated successfully.
Jan 31 03:08:23 np0005603663 podman[96247]: 2026-01-31 08:08:23.72832949 +0000 UTC m=+0.042111073 container create 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.nafbok supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603663 podman[96247]: 2026-01-31 08:08:23.709014128 +0000 UTC m=+0.022795731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:23 np0005603663 podman[96247]: 2026-01-31 08:08:23.812966684 +0000 UTC m=+0.126748307 container init 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:08:23 np0005603663 podman[96247]: 2026-01-31 08:08:23.819229823 +0000 UTC m=+0.133011426 container start 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:08:23 np0005603663 bash[96247]: 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5
Jan 31 03:08:23 np0005603663 systemd[1]: Started Ceph mds.cephfs.compute-0.nafbok for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 03:08:23 np0005603663 ceph-mds[96266]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:08:23 np0005603663 ceph-mds[96266]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 31 03:08:23 np0005603663 ceph-mds[96266]: main not setting numa affinity
Jan 31 03:08:23 np0005603663 ceph-mds[96266]: pidfile_write: ignore empty --pid-file
Jan 31 03:08:23 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok[96262]: starting mds.cephfs.compute-0.nafbok at 
Jan 31 03:08:23 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 2 from mon.0
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:23 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 03:08:23 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 03:08:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 03:08:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 new map
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-31T08:08:24:405775+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T08:07:59.022433+0000#012modified#0112026-01-31T08:07:59.022433+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nafbok{-1:14262} state up:standby seq 1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 3 from mon.0
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Monitors have assigned me to become a standby
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:boot
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] as mds.0
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nafbok assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.nafbok"} v 0)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.nafbok"} : dispatch
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e4 new map
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-31T08:08:24:412009+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T08:07:59.022433+0000#012modified#0112026-01-31T08:08:24.412002+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14262}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.nafbok{0:14262} state up:creating seq 1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 4 from mon.0
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x1
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x100
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x600
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x601
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x602
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x603
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x604
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x605
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x606
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x607
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:creating}
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x608
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x609
Jan 31 03:08:24 np0005603663 ceph-mds[96266]: mds.0.4 creating_done
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nafbok is now active in filesystem cephfs as rank 0
Jan 31 03:08:24 np0005603663 podman[96427]: 2026-01-31 08:08:24.476113752 +0000 UTC m=+0.057031778 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:24 np0005603663 python3[96428]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:24 np0005603663 podman[96457]: 2026-01-31 08:08:24.59452751 +0000 UTC m=+0.044884281 container create f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:08:24 np0005603663 podman[96463]: 2026-01-31 08:08:24.61240774 +0000 UTC m=+0.049816612 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:24 np0005603663 podman[96427]: 2026-01-31 08:08:24.625949207 +0000 UTC m=+0.206867233 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:24 np0005603663 systemd[1]: Started libpod-conmon-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope.
Jan 31 03:08:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:24 np0005603663 podman[96457]: 2026-01-31 08:08:24.575429155 +0000 UTC m=+0.025785916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:24 np0005603663 podman[96457]: 2026-01-31 08:08:24.692928247 +0000 UTC m=+0.143285008 container init f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:08:24 np0005603663 podman[96457]: 2026-01-31 08:08:24.69861616 +0000 UTC m=+0.148972891 container start f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:24 np0005603663 podman[96457]: 2026-01-31 08:08:24.715666216 +0000 UTC m=+0.166022977 container attach f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: daemon mds.cephfs.compute-0.nafbok assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: Cluster is now healthy
Jan 31 03:08:24 np0005603663 ceph-mon[75227]: daemon mds.cephfs.compute-0.nafbok is now active in filesystem cephfs as rank 0
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519348361' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 03:08:25 np0005603663 zen_mclaren[96486]: 
Jan 31 03:08:25 np0005603663 zen_mclaren[96486]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":131,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846853,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":83894272,"bytes_avail":64328032256,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"inactive_pgs_ratio":0.1111111119389534},"fsmap":{"epoch":4,"btime":"2026-01-31T08:08:24:412009+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.nafbok","status":"up:creating","gid":14262}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T08:07:33.658076+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"d5a72d36-5e9a-4289-8d07-2ee4a9e0f4d5":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 31 03:08:25 np0005603663 systemd[1]: libpod-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope: Deactivated successfully.
Jan 31 03:08:25 np0005603663 podman[96457]: 2026-01-31 08:08:25.227887019 +0000 UTC m=+0.678243750 container died f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:08:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d-merged.mount: Deactivated successfully.
Jan 31 03:08:25 np0005603663 podman[96457]: 2026-01-31 08:08:25.321318004 +0000 UTC m=+0.771674745 container remove f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:25 np0005603663 systemd[1]: libpod-conmon-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope: Deactivated successfully.
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e5 new map
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-31T08:08:25:416196+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T08:07:59.022433+0000#012modified#0112026-01-31T08:08:25.416194+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14262}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14262 members: 14262#012[mds.cephfs.compute-0.nafbok{0:14262} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 31 03:08:25 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 5 from mon.0
Jan 31 03:08:25 np0005603663 ceph-mds[96266]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 03:08:25 np0005603663 ceph-mds[96266]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 03:08:25 np0005603663 ceph-mds[96266]: mds.0.4 recovery_done -- successful recovery!
Jan 31 03:08:25 np0005603663 ceph-mds[96266]: mds.0.4 active_start
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:active
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:active}
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v86: 10 pgs: 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.778210087 +0000 UTC m=+0.038301684 container create 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:25 np0005603663 systemd[1]: Started libpod-conmon-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope.
Jan 31 03:08:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.858894979 +0000 UTC m=+0.118986606 container init 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.763716594 +0000 UTC m=+0.023808211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.866188857 +0000 UTC m=+0.126280454 container start 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.869671696 +0000 UTC m=+0.129763333 container attach 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:08:25 np0005603663 gifted_shaw[96763]: 167 167
Jan 31 03:08:25 np0005603663 systemd[1]: libpod-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope: Deactivated successfully.
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.871983032 +0000 UTC m=+0.132074619 container died 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:08:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7fc817526de99fb614cd67e5da6ec87cf853d2716653e718f4b463836697c92a-merged.mount: Deactivated successfully.
Jan 31 03:08:25 np0005603663 podman[96747]: 2026-01-31 08:08:25.911570541 +0000 UTC m=+0.171662128 container remove 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:08:25 np0005603663 systemd[1]: libpod-conmon-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.079232194 +0000 UTC m=+0.053892488 container create a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:08:26 np0005603663 systemd[1]: Started libpod-conmon-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope.
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.053378347 +0000 UTC m=+0.028038631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.177811607 +0000 UTC m=+0.152471931 container init a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.186021921 +0000 UTC m=+0.160682185 container start a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.189532811 +0000 UTC m=+0.164193115 container attach a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:08:26 np0005603663 python3[96823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 03:08:26 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.242863182 +0000 UTC m=+0.037357786 container create 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:08:26 np0005603663 systemd[1]: Started libpod-conmon-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope.
Jan 31 03:08:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.227961577 +0000 UTC m=+0.022456151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.325276133 +0000 UTC m=+0.119770727 container init 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.330835772 +0000 UTC m=+0.125330346 container start 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.334997421 +0000 UTC m=+0.129492005 container attach 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:08:26 np0005603663 blissful_keller[96828]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:08:26 np0005603663 blissful_keller[96828]: --> All data devices are unavailable
Jan 31 03:08:26 np0005603663 systemd[1]: libpod-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.603394257 +0000 UTC m=+0.578054631 container died a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:26 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1-merged.mount: Deactivated successfully.
Jan 31 03:08:26 np0005603663 podman[96798]: 2026-01-31 08:08:26.64484158 +0000 UTC m=+0.619501844 container remove a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:08:26 np0005603663 systemd[1]: libpod-conmon-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 03:08:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875625571' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 03:08:26 np0005603663 recursing_ganguly[96849]: 
Jan 31 03:08:26 np0005603663 systemd[1]: libpod-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603663 recursing_ganguly[96849]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dnvgmk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.796110835 +0000 UTC m=+0.590605399 container died 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:26 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435-merged.mount: Deactivated successfully.
Jan 31 03:08:26 np0005603663 podman[96833]: 2026-01-31 08:08:26.835512449 +0000 UTC m=+0.630007013 container remove 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:26 np0005603663 systemd[1]: libpod-conmon-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope: Deactivated successfully.
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.027864486 +0000 UTC m=+0.054567187 container create 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:08:27 np0005603663 systemd[1]: Started libpod-conmon-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope.
Jan 31 03:08:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.001427022 +0000 UTC m=+0.028129733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.103237726 +0000 UTC m=+0.129940407 container init 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.110623317 +0000 UTC m=+0.137325998 container start 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:08:27 np0005603663 nervous_leavitt[96990]: 167 167
Jan 31 03:08:27 np0005603663 systemd[1]: libpod-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope: Deactivated successfully.
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.12088798 +0000 UTC m=+0.147590691 container attach 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.121633201 +0000 UTC m=+0.148335882 container died 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-deebfde82260f6ce7332c51ce1af5950230ecddfda4be8c337aa7a6de68a55be-merged.mount: Deactivated successfully.
Jan 31 03:08:27 np0005603663 podman[96975]: 2026-01-31 08:08:27.197734632 +0000 UTC m=+0.224437343 container remove 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:08:27 np0005603663 systemd[1]: libpod-conmon-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope: Deactivated successfully.
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 03:08:27 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.363662896 +0000 UTC m=+0.050540563 container create e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:27 np0005603663 systemd[1]: Started libpod-conmon-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope.
Jan 31 03:08:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.336965494 +0000 UTC m=+0.023843151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.466313014 +0000 UTC m=+0.153190661 container init e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.476206936 +0000 UTC m=+0.163084603 container start e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.488534858 +0000 UTC m=+0.175412485 container attach e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 03:08:27 np0005603663 python3[97062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:27 np0005603663 hungry_germain[97032]: {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    "0": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "devices": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "/dev/loop3"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            ],
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_name": "ceph_lv0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_size": "21470642176",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "name": "ceph_lv0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "tags": {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.crush_device_class": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.encrypted": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_id": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.vdo": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.with_tpm": "0"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            },
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "vg_name": "ceph_vg0"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        }
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    ],
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    "1": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "devices": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "/dev/loop4"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            ],
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_name": "ceph_lv1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_size": "21470642176",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "name": "ceph_lv1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "tags": {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.crush_device_class": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.encrypted": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_id": "1",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.vdo": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.with_tpm": "0"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            },
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "vg_name": "ceph_vg1"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        }
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    ],
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    "2": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "devices": [
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "/dev/loop5"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            ],
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_name": "ceph_lv2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_size": "21470642176",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "name": "ceph_lv2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "tags": {
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.crush_device_class": "",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.encrypted": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osd_id": "2",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.vdo": "0",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:                "ceph.with_tpm": "0"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            },
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "type": "block",
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:            "vg_name": "ceph_vg2"
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:        }
Jan 31 03:08:27 np0005603663 hungry_germain[97032]:    ]
Jan 31 03:08:27 np0005603663 hungry_germain[97032]: }
Jan 31 03:08:27 np0005603663 systemd[1]: libpod-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope: Deactivated successfully.
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.761508785 +0000 UTC m=+0.448386422 container died e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:08:27 np0005603663 podman[97067]: 2026-01-31 08:08:27.785740706 +0000 UTC m=+0.063209115 container create d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634-merged.mount: Deactivated successfully.
Jan 31 03:08:27 np0005603663 systemd[1]: Started libpod-conmon-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope.
Jan 31 03:08:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:27 np0005603663 podman[97067]: 2026-01-31 08:08:27.755298738 +0000 UTC m=+0.032767176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:27 np0005603663 podman[97067]: 2026-01-31 08:08:27.856789553 +0000 UTC m=+0.134257991 container init d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:27 np0005603663 podman[97067]: 2026-01-31 08:08:27.860231281 +0000 UTC m=+0.137699689 container start d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:08:27 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 5 completed events
Jan 31 03:08:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:08:27 np0005603663 podman[97016]: 2026-01-31 08:08:27.911681819 +0000 UTC m=+0.598559456 container remove e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:27 np0005603663 systemd[1]: libpod-conmon-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:28 np0005603663 podman[97067]: 2026-01-31 08:08:28.048356928 +0000 UTC m=+0.325825336 container attach d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466039113' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 03:08:28 np0005603663 sharp_mclaren[97096]: mimic
Jan 31 03:08:28 np0005603663 systemd[1]: libpod-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.301408837 +0000 UTC m=+0.046517358 container create 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:28 np0005603663 podman[97067]: 2026-01-31 08:08:28.302746695 +0000 UTC m=+0.580215133 container died d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.272946005 +0000 UTC m=+0.018054536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:28 np0005603663 systemd[1]: Started libpod-conmon-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope.
Jan 31 03:08:28 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a-merged.mount: Deactivated successfully.
Jan 31 03:08:28 np0005603663 podman[97067]: 2026-01-31 08:08:28.442463481 +0000 UTC m=+0.719931899 container remove d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:08:28 np0005603663 systemd[1]: libpod-conmon-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.460941378 +0000 UTC m=+0.206049899 container init 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.46555632 +0000 UTC m=+0.210664881 container start 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:08:28 np0005603663 sleepy_shirley[97211]: 167 167
Jan 31 03:08:28 np0005603663 systemd[1]: libpod-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.486724624 +0000 UTC m=+0.231833145 container attach 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.488667349 +0000 UTC m=+0.233775900 container died 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-47d38caf2157904dd06049ddb11643817cc7dd039d9c769908167a9e70834e5b-merged.mount: Deactivated successfully.
Jan 31 03:08:28 np0005603663 podman[97181]: 2026-01-31 08:08:28.57667535 +0000 UTC m=+0.321783871 container remove 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:28 np0005603663 systemd[1]: libpod-conmon-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:28 np0005603663 podman[97252]: 2026-01-31 08:08:28.743403236 +0000 UTC m=+0.065816848 container create 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:08:28 np0005603663 systemd[1]: Started libpod-conmon-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope.
Jan 31 03:08:28 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:28 np0005603663 podman[97252]: 2026-01-31 08:08:28.712762772 +0000 UTC m=+0.035176424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:28 np0005603663 podman[97252]: 2026-01-31 08:08:28.839330193 +0000 UTC m=+0.161743805 container init 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:28 np0005603663 podman[97252]: 2026-01-31 08:08:28.844777798 +0000 UTC m=+0.167191410 container start 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:08:28 np0005603663 podman[97252]: 2026-01-31 08:08:28.85710707 +0000 UTC m=+0.179520702 container attach 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:08:28 np0005603663 radosgw[95251]: v1 topic migration: starting v1 topic migration..
Jan 31 03:08:28 np0005603663 radosgw[95251]: v1 topic migration: finished v1 topic migration
Jan 31 03:08:28 np0005603663 radosgw[95251]: framework: beast
Jan 31 03:08:28 np0005603663 radosgw[95251]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 03:08:28 np0005603663 radosgw[95251]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 03:08:28 np0005603663 radosgw[95251]: starting handler: beast
Jan 31 03:08:29 np0005603663 radosgw[95251]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 03:08:29 np0005603663 radosgw[95251]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dnvgmk,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=e5ce1d65-2430-4be3-9ca0-9683547c77a5,zone_name=default,zonegroup_id=2c8897d5-67a2-451c-a710-7a7bae68fa34,zonegroup_name=default}
Jan 31 03:08:29 np0005603663 python3[97334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.339558522 +0000 UTC m=+0.043433490 container create 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:29 np0005603663 systemd[1]: Started libpod-conmon-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope.
Jan 31 03:08:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.320365595 +0000 UTC m=+0.024240583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.420769079 +0000 UTC m=+0.124644067 container init 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:29 np0005603663 ceph-mds[96266]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 03:08:29 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok[96262]: 2026-01-31T08:08:29.421+0000 7f010199b640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.426155203 +0000 UTC m=+0.130030171 container start 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.431917467 +0000 UTC m=+0.135792465 container attach 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:29 np0005603663 lvm[97408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:08:29 np0005603663 lvm[97408]: VG ceph_vg0 finished
Jan 31 03:08:29 np0005603663 lvm[97411]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:08:29 np0005603663 lvm[97411]: VG ceph_vg1 finished
Jan 31 03:08:29 np0005603663 lvm[97413]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:08:29 np0005603663 lvm[97413]: VG ceph_vg2 finished
Jan 31 03:08:29 np0005603663 angry_meitner[97268]: {}
Jan 31 03:08:29 np0005603663 systemd[1]: libpod-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope: Deactivated successfully.
Jan 31 03:08:29 np0005603663 conmon[97268]: conmon 87798c6306dd254a0bfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope/container/memory.events
Jan 31 03:08:29 np0005603663 podman[97435]: 2026-01-31 08:08:29.621744832 +0000 UTC m=+0.023823910 container died 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:08:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3-merged.mount: Deactivated successfully.
Jan 31 03:08:29 np0005603663 podman[97435]: 2026-01-31 08:08:29.669212656 +0000 UTC m=+0.071291734 container remove 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:29 np0005603663 systemd[1]: libpod-conmon-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope: Deactivated successfully.
Jan 31 03:08:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.0 KiB/s wr, 4 op/s
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 31 03:08:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413874688' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 03:08:29 np0005603663 zen_elion[97401]: 
Jan 31 03:08:29 np0005603663 systemd[1]: libpod-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope: Deactivated successfully.
Jan 31 03:08:29 np0005603663 conmon[97401]: conmon 661767da94bb0acb8f2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope/container/memory.events
Jan 31 03:08:29 np0005603663 zen_elion[97401]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Jan 31 03:08:29 np0005603663 podman[97374]: 2026-01-31 08:08:29.989487853 +0000 UTC m=+0.693362921 container died 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:08:30 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f-merged.mount: Deactivated successfully.
Jan 31 03:08:30 np0005603663 podman[97374]: 2026-01-31 08:08:30.036145194 +0000 UTC m=+0.740020192 container remove 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:08:30 np0005603663 systemd[1]: libpod-conmon-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope: Deactivated successfully.
Jan 31 03:08:30 np0005603663 podman[97584]: 2026-01-31 08:08:30.270332655 +0000 UTC m=+0.052476468 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:08:30 np0005603663 podman[97584]: 2026-01-31 08:08:30.38058851 +0000 UTC m=+0.162732263 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:08:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.440412774 +0000 UTC m=+0.041076543 container create 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:31 np0005603663 systemd[1]: Started libpod-conmon-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope.
Jan 31 03:08:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.422889154 +0000 UTC m=+0.023552953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.525370148 +0000 UTC m=+0.126033997 container init 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.533276883 +0000 UTC m=+0.133940642 container start 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.536357051 +0000 UTC m=+0.137020830 container attach 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:31 np0005603663 heuristic_meitner[97852]: 167 167
Jan 31 03:08:31 np0005603663 systemd[1]: libpod-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope: Deactivated successfully.
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.559007107 +0000 UTC m=+0.159670886 container died 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-603a017ce25e9aa748c2cf08af0559ab611dea935f3b20292d30f687a39008f2-merged.mount: Deactivated successfully.
Jan 31 03:08:31 np0005603663 podman[97836]: 2026-01-31 08:08:31.600021527 +0000 UTC m=+0.200685296 container remove 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:31 np0005603663 systemd[1]: libpod-conmon-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope: Deactivated successfully.
Jan 31 03:08:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:08:31
Jan 31 03:08:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:08:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Some PGs (0.090909) are unknown; try again later
Jan 31 03:08:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 03:08:31 np0005603663 podman[97876]: 2026-01-31 08:08:31.791225682 +0000 UTC m=+0.058324755 container create 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:31 np0005603663 systemd[1]: Started libpod-conmon-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope.
Jan 31 03:08:31 np0005603663 podman[97876]: 2026-01-31 08:08:31.766478236 +0000 UTC m=+0.033577359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:31 np0005603663 podman[97876]: 2026-01-31 08:08:31.904690909 +0000 UTC m=+0.171790022 container init 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:31 np0005603663 podman[97876]: 2026-01-31 08:08:31.921619112 +0000 UTC m=+0.188718175 container start 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:31 np0005603663 podman[97876]: 2026-01-31 08:08:31.925880933 +0000 UTC m=+0.192980046 container attach 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:08:32 np0005603663 romantic_faraday[97893]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:08:32 np0005603663 romantic_faraday[97893]: --> All data devices are unavailable
Jan 31 03:08:32 np0005603663 systemd[1]: libpod-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope: Deactivated successfully.
Jan 31 03:08:32 np0005603663 podman[97876]: 2026-01-31 08:08:32.422874661 +0000 UTC m=+0.689973724 container died 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e-merged.mount: Deactivated successfully.
Jan 31 03:08:32 np0005603663 podman[97876]: 2026-01-31 08:08:32.472167448 +0000 UTC m=+0.739266491 container remove 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:32 np0005603663 systemd[1]: libpod-conmon-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope: Deactivated successfully.
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.48873506906078e-07 of space, bias 4.0, pg target 0.0006586482082872936 quantized to 16 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:32 np0005603663 podman[97988]: 2026-01-31 08:08:32.939807557 +0000 UTC m=+0.040099515 container create 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:32 np0005603663 systemd[1]: Started libpod-conmon-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope.
Jan 31 03:08:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:32 np0005603663 podman[97988]: 2026-01-31 08:08:32.994148557 +0000 UTC m=+0.094440565 container init 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:08:33 np0005603663 podman[97988]: 2026-01-31 08:08:33.003288058 +0000 UTC m=+0.103580006 container start 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:08:33 np0005603663 podman[97988]: 2026-01-31 08:08:33.007334244 +0000 UTC m=+0.107626242 container attach 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:33 np0005603663 unruffled_bouman[98005]: 167 167
Jan 31 03:08:33 np0005603663 systemd[1]: libpod-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope: Deactivated successfully.
Jan 31 03:08:33 np0005603663 podman[97988]: 2026-01-31 08:08:33.009025602 +0000 UTC m=+0.109317560 container died 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:08:33 np0005603663 podman[97988]: 2026-01-31 08:08:32.91991531 +0000 UTC m=+0.020207258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:33 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event d5a72d36-5e9a-4289-8d07-2ee4a9e0f4d5 (Global Recovery Event) in 10 seconds
Jan 31 03:08:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-775d39ad38fb8074f8995d8b655083d712500cf0787231f3b7871f8133e3207a-merged.mount: Deactivated successfully.
Jan 31 03:08:33 np0005603663 podman[97988]: 2026-01-31 08:08:33.048866608 +0000 UTC m=+0.149158566 container remove 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:33 np0005603663 systemd[1]: libpod-conmon-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope: Deactivated successfully.
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 03:08:33 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.217924951 +0000 UTC m=+0.052070466 container create 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:33 np0005603663 systemd[1]: Started libpod-conmon-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope.
Jan 31 03:08:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.198580489 +0000 UTC m=+0.032726054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.31253246 +0000 UTC m=+0.146678015 container init 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.320350583 +0000 UTC m=+0.154496138 container start 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.324766379 +0000 UTC m=+0.158911934 container attach 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]: {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    "0": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "devices": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "/dev/loop3"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            ],
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_name": "ceph_lv0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_size": "21470642176",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "name": "ceph_lv0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "tags": {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.crush_device_class": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.encrypted": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_id": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.vdo": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.with_tpm": "0"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            },
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "vg_name": "ceph_vg0"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        }
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    ],
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    "1": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "devices": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "/dev/loop4"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            ],
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_name": "ceph_lv1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_size": "21470642176",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "name": "ceph_lv1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "tags": {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.crush_device_class": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.encrypted": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_id": "1",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.vdo": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.with_tpm": "0"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            },
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "vg_name": "ceph_vg1"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        }
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    ],
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    "2": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "devices": [
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "/dev/loop5"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            ],
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_name": "ceph_lv2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_size": "21470642176",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "name": "ceph_lv2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "tags": {
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.crush_device_class": "",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.encrypted": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.objectstore": "bluestore",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osd_id": "2",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.vdo": "0",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:                "ceph.with_tpm": "0"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            },
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "type": "block",
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:            "vg_name": "ceph_vg2"
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:        }
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]:    ]
Jan 31 03:08:33 np0005603663 wonderful_bell[98045]: }
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:33 np0005603663 systemd[1]: libpod-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope: Deactivated successfully.
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.632660523 +0000 UTC m=+0.466806078 container died 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:08:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a-merged.mount: Deactivated successfully.
Jan 31 03:08:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.9 KiB/s wr, 220 op/s
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:33 np0005603663 podman[98029]: 2026-01-31 08:08:33.70091729 +0000 UTC m=+0.535062825 container remove 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:08:33 np0005603663 systemd[1]: libpod-conmon-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope: Deactivated successfully.
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 03:08:34 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.153856571 +0000 UTC m=+0.036636816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.262085039 +0000 UTC m=+0.144865234 container create 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:08:34 np0005603663 systemd[1]: Started libpod-conmon-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope.
Jan 31 03:08:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.45424199 +0000 UTC m=+0.337022245 container init 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.463586917 +0000 UTC m=+0.346367112 container start 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:34 np0005603663 systemd[1]: libpod-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope: Deactivated successfully.
Jan 31 03:08:34 np0005603663 ecstatic_poincare[98147]: 167 167
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.520770658 +0000 UTC m=+0.403550853 container attach 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:08:34 np0005603663 podman[98130]: 2026-01-31 08:08:34.522193699 +0000 UTC m=+0.404973924 container died 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:34 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=12.903619766s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 79.987213135s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:34 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=12.903619766s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 79.987213135s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f3577145f452fa2921c1fbdcb51559237a6c2193b993bb2a79df92aadb4670c6-merged.mount: Deactivated successfully.
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 03:08:35 np0005603663 podman[98130]: 2026-01-31 08:08:35.363386777 +0000 UTC m=+1.246166972 container remove 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 systemd[1]: libpod-conmon-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope: Deactivated successfully.
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=43/44 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:35 np0005603663 podman[98172]: 2026-01-31 08:08:35.608645563 +0000 UTC m=+0.121759164 container create 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:08:35 np0005603663 podman[98172]: 2026-01-31 08:08:35.520864369 +0000 UTC m=+0.033978030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:08:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v97: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:35 np0005603663 systemd[1]: Started libpod-conmon-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope.
Jan 31 03:08:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:08:35 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:08:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:36 np0005603663 podman[98172]: 2026-01-31 08:08:36.088948525 +0000 UTC m=+0.602062126 container init 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:36 np0005603663 podman[98172]: 2026-01-31 08:08:36.097933061 +0000 UTC m=+0.611046652 container start 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:36 np0005603663 podman[98172]: 2026-01-31 08:08:36.210528112 +0000 UTC m=+0.723641713 container attach 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 03:08:36 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:36 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 03:08:36 np0005603663 lvm[98268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:08:36 np0005603663 lvm[98269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:08:36 np0005603663 lvm[98269]: VG ceph_vg0 finished
Jan 31 03:08:36 np0005603663 lvm[98268]: VG ceph_vg1 finished
Jan 31 03:08:36 np0005603663 lvm[98271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:08:36 np0005603663 lvm[98271]: VG ceph_vg2 finished
Jan 31 03:08:36 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.787521362s) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 93.562095642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:36 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.787521362s) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 93.562095642s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:36 np0005603663 suspicious_gould[98189]: {}
Jan 31 03:08:36 np0005603663 systemd[1]: libpod-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Deactivated successfully.
Jan 31 03:08:36 np0005603663 systemd[1]: libpod-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Consumed 1.040s CPU time.
Jan 31 03:08:36 np0005603663 podman[98172]: 2026-01-31 08:08:36.893955819 +0000 UTC m=+1.407069410 container died 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:08:37 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34-merged.mount: Deactivated successfully.
Jan 31 03:08:37 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 03:08:37 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=11.259410858s) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 87.504295349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=11.259410858s) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown pruub 87.504295349s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 podman[98172]: 2026-01-31 08:08:37.384890734 +0000 UTC m=+1.898004335 container remove 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:37 np0005603663 systemd[1]: libpod-conmon-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Deactivated successfully.
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:08:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v99: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:38 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 6 completed events
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:38 np0005603663 ceph-mgr[75519]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Jan 31 03:08:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 03:08:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 03:08:38 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 03:08:38 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 03:08:38 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:39 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=11.337800980s) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active pruub 83.031776428s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:39 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=11.337800980s) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown pruub 83.031776428s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v102: 135 pgs: 1 peering, 62 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 03:08:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48 pruub=13.171987534s) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active pruub 92.220489502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48 pruub=13.171987534s) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown pruub 92.220489502s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=47/48 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 03:08:40 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=48/49 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v105: 181 pgs: 2 peering, 77 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 03:08:41 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 31 03:08:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=11.599143982s) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 34'5 active pruub 92.479797363s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=35/36 n=210 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.248332024s) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 41'482 active pruub 94.129051208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=11.599143982s) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 0'0 unknown pruub 92.479797363s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[9.0( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.248332024s) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 0'0 unknown pruub 94.129051208s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de600 space 0x55d781f92540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828dfb00 space 0x55d782243a40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966080 space 0x55d78213f440 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f200 space 0x55d781f2ae40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970100 space 0x55d781f92b40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de800 space 0x55d781f93740 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782958b80 space 0x55d781b97a40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966500 space 0x55d78207ae40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966900 space 0x55d781f2ba40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966c00 space 0x55d781f2b140 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782975f00 space 0x55d7820af740 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78295e680 space 0x55d782097d40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828df000 space 0x55d7820af440 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966a00 space 0x55d78207ba40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967480 space 0x55d7820ec240 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828dea00 space 0x55d782094840 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894e80 space 0x55d781f6a540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894180 space 0x55d7820dd140 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974b80 space 0x55d7820d7440 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966f80 space 0x55d781ba7740 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967680 space 0x55d7820ecb40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974f80 space 0x55d7820d6240 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970800 space 0x55d781ba6240 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782975d00 space 0x55d781bc3740 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de900 space 0x55d781f29140 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78297ff00 space 0x55d781bc2540 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0c80 space 0x55d781f98540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f500 space 0x55d782152240 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894d00 space 0x55d7820ddd40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0000 space 0x55d781bc3d40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0280 space 0x55d781f7fa40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974d80 space 0x55d7820d6b40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0e00 space 0x55d782107740 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e7c00 space 0x55d782236840 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78290fa80 space 0x55d781ba6b40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974b00 space 0x55d782152b40 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781b0b200 space 0x55d781bc2e40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78295ff80 space 0x55d781f2cb40 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0700 space 0x55d782a56540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f600 space 0x55d781f93140 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966100 space 0x55d781f2c240 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967780 space 0x55d7820ae840 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0680 space 0x55d7820dd440 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970380 space 0x55d782097140 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5ef80 space 0x55d7820ec840 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782916b80 space 0x55d781f98e40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e7300 space 0x55d782cde840 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782959580 space 0x55d781b97140 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894d80 space 0x55d782106540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974200 space 0x55d7820dc840 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de300 space 0x55d781f6ae40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974400 space 0x55d7820af140 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5ed00 space 0x55d782095d40 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0a00 space 0x55d782094540 0x0~9a clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966e80 space 0x55d781f2a840 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974980 space 0x55d7820d7d40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782927a00 space 0x55d781b96e40 0x0~6e clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e6880 space 0x55d781f2dd40 0x0~98 clean)
Jan 31 03:08:41 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967200 space 0x55d7820ed440 0x0~6e clean)
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 48 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=9.509422302s) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 37'38 active pruub 95.593406677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 48 pg[6.0( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=9.509422302s) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 0'0 unknown pruub 95.593406677s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 03:08:42 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] update: starting ev 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.14( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.15( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.14( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.16( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.17( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.16( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.10( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.10( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.11( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.12( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.12( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.13( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] complete: finished ev 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.8( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.3( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.2( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.9( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.8( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.2( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.7( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.6( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.5( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.4( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.4( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.5( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.19( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.18( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.18( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.16( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.14( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.17( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.10( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.13( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.12( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.8( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.3( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.2( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.a( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.7( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.5( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.4( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1a( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.19( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.5( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:43 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 16 completed events
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v108: 243 pgs: 2 peering, 139 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 03:08:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 03:08:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 03:08:44 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 03:08:44 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 03:08:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=13.963813782s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active pruub 98.159751892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=37/38 n=9 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=11.955750465s) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 89.636589050s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=11.955750465s) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 89.636589050s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=13.963813782s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown pruub 98.159751892s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 31 03:08:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 31 03:08:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 03:08:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 03:08:45 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 31 03:08:46 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 31 03:08:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 03:08:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 03:08:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:47 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 03:08:47 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 03:08:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 03:08:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 03:08:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:50 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 03:08:50 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 03:08:50 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 03:08:50 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 03:08:51 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 03:08:51 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 03:08:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:08:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.366329193s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.747566223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403979301s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785240173s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546413422s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927688599s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546381950s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927696228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.366270065s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.747566223s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546366692s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927688599s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403889656s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785240173s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546327591s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927696228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403687477s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785438538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403489113s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785255432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545950890s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927726746s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403654099s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785438538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545910835s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927726746s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403573036s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785461426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403446198s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785255432s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403544426s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785461426s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403316498s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785453796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403273582s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785453796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545407295s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927742004s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403190613s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785583496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545362473s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927742004s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403147697s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785568237s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403170586s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785583496s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403113365s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785568237s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403079033s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785713196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403055191s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785713196s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551298141s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.933990479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.554909706s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.937629700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551251411s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.933990479s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.554866791s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.937629700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551415443s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.934501648s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402625084s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785736084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551385880s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934501648s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402586937s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785736084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402750969s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.786003113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403772354s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.787002563s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402713776s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.786003113s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551176071s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.934494019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403694153s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.787002563s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551154137s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934494019s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402445793s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785827637s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402412415s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785827637s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402297974s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785804749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402265549s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785812378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402276993s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785804749s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402242661s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785812378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402084351s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785820007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402440071s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.786193848s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402047157s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785820007s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402405739s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.786193848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403245926s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.787071228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403223991s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.787071228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401849747s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785881042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401474953s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785881042s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401257515s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785247803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.400353432s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785247803s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.811231613s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653053284s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567105293s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.408935547s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.811202049s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653053284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566882133s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.408943176s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.791992188s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.634162903s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.791945457s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.634162903s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566702843s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.409156799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143866539s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986343384s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566665649s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.409156799s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143800735s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986335754s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143803596s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986343384s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143773079s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986335754s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567013741s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.408935547s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143560410s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986312866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566806793s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.408943176s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.572649956s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415519714s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143499374s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986412048s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143529892s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986312866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.572596550s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415519714s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143474579s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986412048s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143161774s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986320496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809897423s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653068542s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809860229s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653068542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.810057640s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653305054s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143073082s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986320496s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.810032845s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653305054s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809679985s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653076172s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809659958s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653076172s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142872810s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986312866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142829895s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986312866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568995476s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412544250s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568959236s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412567139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568970680s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412544250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809531212s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653137207s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568924904s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412567139s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809463501s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653137207s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142394066s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986145020s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809522629s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653297424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142354012s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986145020s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809453964s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653297424s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141651154s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985633850s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568623543s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412681580s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141567230s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985633850s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.569154739s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413276672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809201241s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653343201s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568585396s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412681580s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809109688s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653343201s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141268730s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985618591s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141236305s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985618591s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568981171s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413444519s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808795929s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653289795s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568946838s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413444519s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808708191s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653289795s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140923500s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985603333s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140869141s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985603333s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808568954s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653335571s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568681717s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413459778s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808535576s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653335571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568647385s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413459778s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568542480s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413581848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568243980s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413276672s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568473816s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413581848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140815735s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985946655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140743256s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985946655s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808373451s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653633118s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808355331s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653633118s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141191483s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986480713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567784309s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.413619995s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140518188s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986480713s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567600250s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.413619995s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.807469368s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653564453s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.807430267s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653564453s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.139079094s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985427856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568891525s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415252686s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.139038086s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985427856s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568850517s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415252686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138841629s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985397339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138813019s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985397339s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806970596s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653594971s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.1e( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806947708s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653594971s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138633728s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985435486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138595581s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985435486s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806786537s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653732300s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138245583s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985260010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138203621s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985260010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806687355s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653732300s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567918777s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415283203s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806267738s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653656006s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567814827s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415267944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137924194s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985374451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567847252s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415283203s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137880325s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985374451s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806214333s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653656006s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567748070s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415267944s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805973053s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653640747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567916870s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415634155s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805935860s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653678894s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805913925s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653640747s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567890167s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415634155s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805903435s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653678894s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567959785s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.416152954s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137019157s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985260010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137157440s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985412598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567924500s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.416152954s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567237854s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415512085s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136970520s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985260010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137116432s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985412598s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567194939s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415512085s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137442589s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985946655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567556381s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.416091919s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566939354s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415596008s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567514420s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.416091919s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136508942s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985221863s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566890717s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415596008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136459351s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985221863s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137128830s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986152649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566961288s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415992737s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566926956s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415992737s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137091637s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986152649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136816025s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985946655s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804634094s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653869629s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566829681s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.416191101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804590225s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653869629s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566802979s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.416191101s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804243088s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653762817s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804201126s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653762817s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.135580063s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985176086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804260254s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653862000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.135535240s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985176086s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804222107s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653862000s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.7( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.4( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.8( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.9( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.d( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.e( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.1( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.15( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.16( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.17( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.1a( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.19( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.6( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.b( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.1( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.2( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.f( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.11( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.10( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.13( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.12( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.14( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512075424s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.927856445s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.477354050s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893218994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463869095s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879730225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512034416s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.927856445s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.477331161s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893218994s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463832855s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879730225s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475920677s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.891906738s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475829124s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.891906738s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463495255s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879730225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463477135s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879730225s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475553513s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.891914368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475517273s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.891914368s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334954262s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751403809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334927559s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751403809s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334781647s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751213074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512706757s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929229736s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512685776s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929229736s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334666252s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751213074s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476438522s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893142700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512269974s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929046631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462862015s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879676819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512231827s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929046631s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462832451s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879676819s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476285934s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893142700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.313980103s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.730964661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.313960075s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.730964661s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462627411s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879745483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476002693s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893150330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462597847s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879745483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475985527s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893150330s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334197998s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751396179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475849152s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893211365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511613846s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929054260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333983421s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751396179s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511595726s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929054260s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475621223s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893196106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475601196s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893196106s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511446953s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929191589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475505829s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893302917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475494385s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893302917s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511421204s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929191589s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483572960s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901481628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483560562s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901481628s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461873055s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879890442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461859703s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879890442s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333656311s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751785278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511238098s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929397583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333625793s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751785278s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510953903s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929138184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461362839s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879600525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461347580s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879600525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474966049s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893211365s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334375381s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752700806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334362030s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752700806s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483026505s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901390076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483014107s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901390076s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474869728s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893302917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461119652s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879592896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461109161s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879592896s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474858284s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893302917s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.517030716s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935539246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.517004013s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935539246s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334179878s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752746582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334168434s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752746582s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482809067s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482793808s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901397705s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460900307s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879600525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460887909s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879600525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334342003s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753082275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334328651s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753082275s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482596397s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482586861s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901397705s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482557297s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901405334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510538101s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929397583s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482544899s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901405334s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510442734s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929313660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510410309s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929313660s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510427475s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929374695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510416985s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929374695s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333257675s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752258301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333246231s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752258301s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516615868s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935722351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460489273s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879608154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516606331s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935722351s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482373238s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901496887s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460473061s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879608154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482350349s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333232880s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752441406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333211899s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752441406s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482227325s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901504517s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482213974s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901504517s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460116386s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879432678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333134651s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752471924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333121300s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752471924s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460096359s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879432678s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516116142s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935531616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516103745s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935531616s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.459880829s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879325867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.459869385s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879325867s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333497047s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753059387s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.515996933s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935562134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.515967369s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935562134s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333464622s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753059387s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.509437561s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929138184s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.14( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.10( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.c( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.e( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.15( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.11( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.12( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.d( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.336371422s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879249573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.359142303s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902038574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.359107971s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902038574s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.336319923s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879249573s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358894348s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358872414s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.392084122s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935607910s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.392064095s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935607910s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335464478s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879196167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335405350s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879165649s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335379601s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879165649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335411072s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879196167s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358260155s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358234406s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391707420s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935768127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391681671s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935768127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358014107s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902267456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357990265s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902267456s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391457558s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935882568s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391435623s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935882568s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334536552s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879203796s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334514618s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879203796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.f( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.208025932s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752876282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.208004951s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752876282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207977295s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752876282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207951546s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752876282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357210159s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334060669s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879150391s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357098579s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334037781s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879150391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357048988s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902259827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357003212s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902267456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356979370s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902267456s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390460968s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935829163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.333687782s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879142761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390439034s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935829163s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.333665848s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879142761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357176781s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902030945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356482506s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902030945s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207416534s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753074646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207395554s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753074646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.9( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207137108s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752899170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207106590s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752899170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356471062s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902343750s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356512070s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 41'483 active pruub 105.902404785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356439590s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902343750s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356477737s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 105.902404785s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390192986s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936172485s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390171051s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936172485s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357024193s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902259827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356262207s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902442932s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390110970s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936317444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.332658768s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.878890991s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390088081s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936317444s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356236458s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902442932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.332636833s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.878890991s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206583023s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752952576s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356039047s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389750481s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936210632s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356015205s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389726639s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936210632s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206468582s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752952576s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389445305s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936195374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389420509s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936195374s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355772972s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902580261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206260681s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753089905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355648041s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355749130s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902580261s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206235886s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753089905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355624199s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389324188s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936279297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355732918s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902465820s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389301300s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936279297s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355465889s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902465820s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355365753s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902557373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.331678391s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.878875732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355341911s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902557373s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.331655502s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.878875732s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355225563s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902565002s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355195045s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902565002s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205713272s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753135681s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205686569s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753135681s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388819695s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936332703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388796806s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936332703s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355127335s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388573647s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936309814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355089188s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902839661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355102539s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902832031s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388548851s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936309814s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355070114s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902839661s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.329282761s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.877197266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.354876518s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902847290s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.329265594s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.877197266s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205083847s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753089905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205039978s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753074646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205048561s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753089905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205020905s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753074646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.354350090s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902847290s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.b( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.2( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.6( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.4( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.18( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1a( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.1b( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1d( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1f( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:52 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.1c( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-mgr[75519]: [progress INFO root] Completed event 4e7c68ba-2212-45d3-b6e9-ef9409eb8f49 (Global Recovery Event) in 15 seconds
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 03:08:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.18( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.1e( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1b( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.11( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.1d( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.8( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1a( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.7( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.e( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.5( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.a( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.e( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.11( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.13( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.16( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.11( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1c( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.18( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.17( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.14( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.15( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.16( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.8( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.b( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.2( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.e( v 53'19 lc 38'4 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1f( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.d( v 53'19 lc 38'5 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.5( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.f( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.2( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.13( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.3( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1c( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1d( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.7( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.15( v 53'19 lc 38'3 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.1e( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.18( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.19( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.9( v 53'19 lc 38'8 (0'0,53'19] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1b( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.f( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.13( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.15( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.11( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.12( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.8( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.9( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.16( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.d( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.f( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 37'21 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.3( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 37'9 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.c( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.4( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.7( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.9( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.5( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.c( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.9( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.4( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.1b( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.5( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.a( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1a( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.14( v 53'19 lc 38'7 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.4( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1d( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.2( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.3( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.6( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.a( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.17( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.19( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.18( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.d( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.12( v 53'19 lc 41'17 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.6( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.f( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.7( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.12( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.14( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.10( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.9( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.15( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.12( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1f( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:53 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 03:08:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 03:08:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 03:08:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 03:08:54 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 03:08:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 03:08:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.930760384s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934173584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.930695534s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934173584s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.923440933s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.927833557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.923384666s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927833557s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929800034s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934494019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929306984s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934066772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929759026s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934494019s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929266930s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934066772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=53'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 03:08:55 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 03:08:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 87 B/s, 1 objects/s recovering
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 6/249 objects degraded (2.410%), 3 pgs degraded (PG_DEGRADED)
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 03:08:56 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 03:08:56 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 03:08:56 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 03:08:56 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=56/57 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:56 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=56/57 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:56 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.e( v 39'39 lc 37'19 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:56 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 03:08:57 np0005603663 ceph-mon[75227]: Health check failed: Degraded data redundancy: 6/249 objects degraded (2.410%), 3 pgs degraded (PG_DEGRADED)
Jan 31 03:08:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 98 B/s, 2 objects/s recovering
Jan 31 03:08:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 03:08:57 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 03:08:58 np0005603663 ceph-mgr[75519]: [progress INFO root] Writing back 17 completed events
Jan 31 03:08:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 03:08:58 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58 pruub=12.939789772s) [0] async=[0] r=-1 lpr=58 pi=[50,58)/1 crt=41'483 lcod 0'0 active pruub 110.058883667s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:58 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58 pruub=12.939711571s) [0] r=-1 lpr=58 pi=[50,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.058883667s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:58 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:58 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:08:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 03:08:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 03:08:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 03:08:59 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.936095238s) [0] async=[0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 active pruub 110.058906555s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:59 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.936017036s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.058906555s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:59 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.941884041s) [0] async=[0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 active pruub 110.067108154s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:59 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.941802979s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067108154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:08:59 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:59 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:59 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:08:59 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:08:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:08:59 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:08:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 2 active+recovery_wait, 13 active+recovery_wait+remapped, 6 peering, 2 active+recovery_wait+degraded, 1 active+recovering, 281 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 87/249 objects misplaced (34.940%); 193 B/s, 5 objects/s recovering
Jan 31 03:09:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 03:09:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 03:09:00 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 03:09:00 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:00 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:00 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:00 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.214437485s) [0] async=[0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 active pruub 110.067276001s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:00 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.214324951s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067276001s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:00 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.212243080s) [0] async=[0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 active pruub 110.067161560s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:00 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.212137222s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067161560s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:00 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:01 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 03:09:01 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 03:09:01 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:01 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=59/60 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 03:09:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 433 B/s, 1 keys/s, 9 objects/s recovering
Jan 31 03:09:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 03:09:01 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 03:09:02 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61 pruub=9.120767593s) [0] async=[0] r=-1 lpr=61 pi=[50,61)/1 crt=41'483 lcod 0'0 active pruub 110.067260742s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:02 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61 pruub=9.120581627s) [0] r=-1 lpr=61 pi=[50,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067260742s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:02 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:02 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:02 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 2 pgs degraded)
Jan 31 03:09:02 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 03:09:02 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:02 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=60/61 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 03:09:03 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.935668945s) [0] async=[0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 active pruub 118.067489624s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:03 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.935050011s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067489624s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:03 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.933579445s) [0] async=[0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 active pruub 118.067451477s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:03 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.933343887s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067451477s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:03 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:03 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:03 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:03 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 522 B/s, 2 keys/s, 10 objects/s recovering
Jan 31 03:09:03 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=61/62 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 2 pgs degraded)
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: Cluster is now healthy
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 03:09:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 03:09:04 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 03:09:04 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63 pruub=14.899969101s) [0] async=[0] r=-1 lpr=63 pi=[50,63)/1 crt=41'483 lcod 0'0 active pruub 118.067497253s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:04 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63 pruub=14.899835587s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067497253s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 03:09:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 03:09:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 03:09:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 03:09:05 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703495026s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067504883s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703622818s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067710876s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703415871s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067504883s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703359604s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067710876s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702342987s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067642212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702383995s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067672729s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702288628s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067642212s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702269554s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067672729s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.700806618s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=53'484 lcod 53'484 active pruub 118.067619324s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.700736046s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 118.067619324s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:05 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 552 B/s, 13 objects/s recovering
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 03:09:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 03:09:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 03:09:06 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:06 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.617675781s) [0] async=[0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 active pruub 118.067848206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:06 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.617586136s) [0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067848206s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=56'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:06 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.613872528s) [0] async=[0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 active pruub 118.067260742s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:06 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.613566399s) [0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067260742s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:06 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 03:09:06 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 03:09:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 03:09:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 03:09:07 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 03:09:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 554 B/s, 13 objects/s recovering
Jan 31 03:09:07 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 66 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:07 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 66 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 489 B/s, 11 objects/s recovering
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 03:09:09 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67 pruub=15.362014771s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.710304260s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67 pruub=15.361968994s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.710304260s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67 pruub=15.361686707s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.710273743s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67 pruub=15.361618042s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.710273743s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359692574s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.709526062s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359621048s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.709526062s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359480858s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.709434509s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:10 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359438896s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.709434509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 03:09:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 03:09:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 03:09:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 03:09:10 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.7( v 39'39 lc 37'21 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=67/68 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:10 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 1 objects/s recovering
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 03:09:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 03:09:12 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 03:09:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 03:09:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 03:09:12 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.201407433s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 active pruub 126.928176880s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:12 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.201289177s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.928176880s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:12 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 69 pg[6.4( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:12 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.207127571s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 active pruub 126.934516907s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:12 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.206985474s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.934516907s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:12 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 69 pg[6.c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 03:09:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 objects/s recovering
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 03:09:13 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 03:09:13 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 03:09:14 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 03:09:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 03:09:14 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 03:09:14 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 70 pg[6.4( v 39'39 lc 37'11 (0'0,39'39] local-lis/les=69/70 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:14 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 70 pg[6.c( v 39'39 lc 37'17 (0'0,39'39] local-lis/les=69/70 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 03:09:15 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 03:09:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 131 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 03:09:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 03:09:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 03:09:16 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71 pruub=9.507270813s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 active pruub 124.724533081s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:16 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71 pruub=9.507144928s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 unknown NOTIFY pruub 124.724533081s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:16 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71 pruub=9.491441727s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 active pruub 124.709686279s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:16 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71 pruub=9.491330147s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 unknown NOTIFY pruub 124.709686279s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 71 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 71 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded (PG_DEGRADED)
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 03:09:16 np0005603663 python3[98338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 72 pg[6.5( v 39'39 lc 37'9 (0'0,39'39] local-lis/les=71/72 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:16 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 03:09:16 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 72 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:16 np0005603663 podman[98339]: 2026-01-31 08:09:16.847145906 +0000 UTC m=+0.027272280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:09:17 np0005603663 podman[98339]: 2026-01-31 08:09:17.113435384 +0000 UTC m=+0.293561758 container create 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:09:17 np0005603663 systemd[76601]: Starting Mark boot as successful...
Jan 31 03:09:17 np0005603663 systemd[76601]: Finished Mark boot as successful.
Jan 31 03:09:17 np0005603663 systemd[1]: Started libpod-conmon-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope.
Jan 31 03:09:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:17 np0005603663 podman[98339]: 2026-01-31 08:09:17.36080374 +0000 UTC m=+0.540930154 container init 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:09:17 np0005603663 podman[98339]: 2026-01-31 08:09:17.365801429 +0000 UTC m=+0.545927793 container start 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:17 np0005603663 podman[98339]: 2026-01-31 08:09:17.415730801 +0000 UTC m=+0.595857165 container attach 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 113 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 03:09:18 np0005603663 ceph-mon[75227]: Health check failed: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded (PG_DEGRADED)
Jan 31 03:09:18 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 03:09:18 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 03:09:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 7 op/s; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 134 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 03:09:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 03:09:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 03:09:20 np0005603663 compassionate_lamport[98356]: could not fetch user info: no user info saved
Jan 31 03:09:20 np0005603663 systemd[1]: libpod-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope: Deactivated successfully.
Jan 31 03:09:20 np0005603663 podman[98339]: 2026-01-31 08:09:20.586918443 +0000 UTC m=+3.767044837 container died 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:09:21 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e-merged.mount: Deactivated successfully.
Jan 31 03:09:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 18 op/s; 335 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 03:09:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 03:09:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 03:09:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 03:09:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 03:09:21 np0005603663 podman[98339]: 2026-01-31 08:09:21.738102569 +0000 UTC m=+4.918228933 container remove 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 03:09:21 np0005603663 systemd[1]: libpod-conmon-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope: Deactivated successfully.
Jan 31 03:09:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded)
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 03:09:22 np0005603663 python3[98478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:09:22 np0005603663 podman[98479]: 2026-01-31 08:09:22.138834999 +0000 UTC m=+0.018815270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 03:09:22 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 03:09:22 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 03:09:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 03:09:22 np0005603663 podman[98479]: 2026-01-31 08:09:22.900141636 +0000 UTC m=+0.780121927 container create e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:09:23 np0005603663 systemd[1]: Started libpod-conmon-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope.
Jan 31 03:09:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded)
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: Cluster is now healthy
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 03:09:23 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 31 03:09:23 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 03:09:23 np0005603663 podman[98479]: 2026-01-31 08:09:23.602897275 +0000 UTC m=+1.482877546 container init e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:09:23 np0005603663 podman[98479]: 2026-01-31 08:09:23.610591174 +0000 UTC m=+1.490571425 container start e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 16 op/s; 222 B/s, 0 objects/s recovering
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 03:09:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.146315575s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 active pruub 137.893890381s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.146218300s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 137.893890381s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:23 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154381752s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=72'485 lcod 72'485 active pruub 137.902816772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154334068s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=72'485 lcod 72'485 unknown NOTIFY pruub 137.902816772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154219627s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 active pruub 137.902801514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154117584s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 137.902801514s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.153962135s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 41'483 active pruub 137.903045654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:23 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.153791428s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 137.903045654s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:23 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:23 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:23 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:23 np0005603663 podman[98479]: 2026-01-31 08:09:23.838996856 +0000 UTC m=+1.718977147 container attach e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74 pruub=14.993849754s) [2] r=-1 lpr=74 pi=[65,74)/1 crt=72'484 lcod 72'484 active pruub 143.717453003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74 pruub=14.993807793s) [2] r=-1 lpr=74 pi=[65,74)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 143.717453003s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74 pruub=13.763647079s) [2] r=-1 lpr=74 pi=[64,74)/1 crt=41'483 lcod 41'483 active pruub 142.487899780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74 pruub=13.763580322s) [2] r=-1 lpr=74 pi=[64,74)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 142.487899780s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484574318s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 lcod 41'483 active pruub 140.209106445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484548569s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 140.209106445s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484263420s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 active pruub 140.209136963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484206200s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 unknown NOTIFY pruub 140.209136963s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74) [2] r=0 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74) [2] r=0 lpr=74 pi=[64,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74) [2] r=0 lpr=74 pi=[62,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74) [2] r=0 lpr=74 pi=[62,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 03:09:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=72'485 lcod 72'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=72'485 lcod 72'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:24 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:24 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 03:09:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 03:09:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 03:09:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 19 op/s; 222 B/s, 0 objects/s recovering
Jan 31 03:09:25 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[64,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[64,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[65,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[65,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:25 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=0 lpr=75 pi=[65,75)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=0 lpr=75 pi=[65,75)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=0 lpr=75 pi=[64,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:26 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=0 lpr=75 pi=[64,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 03:09:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 03:09:26 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=74/75 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:26 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:26 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=74/75 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=72'486 lcod 72'485 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:26 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 03:09:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 03:09:27 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 03:09:27 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:27 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:27 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76 pruub=14.690245628s) [2] async=[2] r=-1 lpr=76 pi=[50,76)/1 crt=72'484 lcod 72'484 active pruub 140.986663818s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:27 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76 pruub=14.690156937s) [2] r=-1 lpr=76 pi=[50,76)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 140.986663818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:27 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=75/76 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[62,75)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:27 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=75/76 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[64,75)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:27 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:27 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[65,75)/1 crt=72'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 3 op/s
Jan 31 03:09:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 03:09:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 03:09:28 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 03:09:28 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 pct=0'0 crt=75'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:28 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=75'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:28 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:28 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:28 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442836761s) [2] async=[2] r=-1 lpr=77 pi=[50,77)/1 crt=41'483 lcod 0'0 active pruub 140.986724854s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:28 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442744255s) [2] r=-1 lpr=77 pi=[50,77)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 140.986724854s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:28 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442586899s) [2] async=[2] r=-1 lpr=77 pi=[50,77)/1 crt=75'487 lcod 75'487 active pruub 140.986816406s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:28 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442452431s) [2] r=-1 lpr=77 pi=[50,77)/1 crt=75'487 lcod 75'487 unknown NOTIFY pruub 140.986816406s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:28 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 03:09:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 03:09:29 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 03:09:29 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 03:09:29 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78 pruub=14.123814583s) [2] async=[2] r=-1 lpr=78 pi=[62,78)/1 crt=72'484 lcod 72'484 active pruub 147.509994507s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:29 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78 pruub=14.123645782s) [2] r=-1 lpr=78 pi=[62,78)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 147.509994507s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:29 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:29 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:29 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 03:09:29 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=77/78 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=75'488 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:29 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=77/78 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 1 active+recovery_wait+remapped, 1 active+remapped, 1 active+recovering+remapped, 4 remapped+peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 5 op/s; 12/250 objects misplaced (4.800%); 38 B/s, 0 objects/s recovering
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]: {
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "user_id": "openstack",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "display_name": "openstack",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "email": "",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "suspended": 0,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "max_buckets": 1000,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "subusers": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "keys": [
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        {
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:            "user": "openstack",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:            "access_key": "4KDH2DMGXG8BAVVQQ56K",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:            "secret_key": "2aeBpeL3MIacX0sq4TmgW6Hei3rNmrWGE6tXmn0q",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:            "active": true,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:            "create_date": "2026-01-31T08:09:30.193014Z"
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        }
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    ],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "swift_keys": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "caps": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "op_mask": "read, write, delete",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "default_placement": "",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "default_storage_class": "",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "placement_tags": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "bucket_quota": {
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "enabled": false,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "check_on_raw": false,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_size": -1,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_size_kb": 0,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_objects": -1
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    },
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "user_quota": {
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "enabled": false,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "check_on_raw": false,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_size": -1,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_size_kb": 0,
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:        "max_objects": -1
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    },
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "temp_url_keys": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "type": "rgw",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "mfa_ids": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "account_id": "",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "path": "/",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "create_date": "2026-01-31T08:09:30.192463Z",
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "tags": [],
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]:    "group_ids": []
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]: }
Jan 31 03:09:30 np0005603663 gallant_montalcini[98494]: 
Jan 31 03:09:30 np0005603663 systemd[1]: libpod-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope: Deactivated successfully.
Jan 31 03:09:30 np0005603663 podman[98479]: 2026-01-31 08:09:30.228490127 +0000 UTC m=+8.108470418 container died e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:09:30 np0005603663 systemd[1]: var-lib-containers-storage-overlay-99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3-merged.mount: Deactivated successfully.
Jan 31 03:09:30 np0005603663 podman[98479]: 2026-01-31 08:09:30.275690768 +0000 UTC m=+8.155671019 container remove e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:30 np0005603663 systemd[1]: libpod-conmon-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope: Deactivated successfully.
Jan 31 03:09:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 03:09:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 03:09:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 03:09:30 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79 pruub=11.654903412s) [2] async=[2] r=-1 lpr=79 pi=[50,79)/1 crt=41'483 lcod 0'0 active pruub 140.986663818s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79 pruub=11.654653549s) [2] r=-1 lpr=79 pi=[50,79)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 140.986663818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79 pruub=13.149453163s) [2] async=[2] r=-1 lpr=79 pi=[64,79)/1 crt=72'484 lcod 72'484 active pruub 147.522628784s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79 pruub=13.149522781s) [2] async=[2] r=-1 lpr=79 pi=[65,79)/1 crt=76'486 lcod 76'486 active pruub 147.522735596s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79 pruub=13.149383545s) [2] r=-1 lpr=79 pi=[64,79)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 147.522628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79 pruub=13.149377823s) [2] r=-1 lpr=79 pi=[65,79)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 147.522735596s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79 pruub=13.148480415s) [2] async=[2] r=-1 lpr=79 pi=[62,79)/1 crt=41'483 active pruub 147.522628784s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79 pruub=13.148399353s) [2] r=-1 lpr=79 pi=[62,79)/1 crt=41'483 unknown NOTIFY pruub 147.522628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 pct=0'0 crt=76'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 crt=76'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=78/79 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 crt=76'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 03:09:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 03:09:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 03:09:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 03:09:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 03:09:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 03:09:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 03:09:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 03:09:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=79/80 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 crt=78'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=79/80 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:09:31
Jan 31 03:09:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:09:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Jan 31 03:09:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 25 op/s; 12/250 objects misplaced (4.800%); 311 B/s, 8 objects/s recovering
Jan 31 03:09:32 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 03:09:32 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:09:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:09:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 19 op/s; 12/250 objects misplaced (4.800%); 235 B/s, 6 objects/s recovering
Jan 31 03:09:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 323 B/s wr, 20 op/s; 315 B/s, 7 objects/s recovering
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 03:09:35 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 81 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81 pruub=11.125459671s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 active pruub 150.934844971s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:35 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 81 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81 pruub=11.125280380s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 150.934844971s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:35 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 03:09:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[6.8( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[48,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:35 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 03:09:35 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 03:09:36 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450086594s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=41'483 lcod 0'0 active pruub 145.902664185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:36 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450015068s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 145.902664185s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:36 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450208664s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=76'486 lcod 76'486 active pruub 145.903137207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:36 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450128555s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 145.903137207s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 03:09:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 03:09:36 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:36 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:37 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=81/82 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 03:09:37 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 03:09:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 325 B/s wr, 3 op/s; 118 B/s, 1 objects/s recovering
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 03:09:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 03:09:38 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.604473114s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 active pruub 148.712814331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:38 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.604420662s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 148.712814331s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:38 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 83 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83) [0] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 31 03:09:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:09:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:09:38 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 03:09:38 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 03:09:39 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:39 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[50,82)/1 crt=78'487 lcod 76'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 03:09:39 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 84 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=83/84 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83) [0] r=0 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.169443908 +0000 UTC m=+0.020489989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.298830141 +0000 UTC m=+0.149876212 container create 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:09:39 np0005603663 systemd[1]: Started libpod-conmon-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope.
Jan 31 03:09:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.575165487 +0000 UTC m=+0.426211568 container init 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.583399711 +0000 UTC m=+0.434445772 container start 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:09:39 np0005603663 naughty_wu[98751]: 167 167
Jan 31 03:09:39 np0005603663 systemd[1]: libpod-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope: Deactivated successfully.
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.618816193 +0000 UTC m=+0.469862264 container attach 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.61906995 +0000 UTC m=+0.470116011 container died 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 03:09:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 03:09:39 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8ef390ce3237eb08a76ef2d6a5437afa0f196c3b716b7612fdcd94fa581f1297-merged.mount: Deactivated successfully.
Jan 31 03:09:39 np0005603663 podman[98735]: 2026-01-31 08:09:39.992952603 +0000 UTC m=+0.843998674 container remove 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:09:39 np0005603663 systemd[1]: libpod-conmon-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope: Deactivated successfully.
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.15920095 +0000 UTC m=+0.084488320 container create cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.098652442 +0000 UTC m=+0.023939822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:40 np0005603663 systemd[1]: Started libpod-conmon-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope.
Jan 31 03:09:40 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.430719703 +0000 UTC m=+0.356007053 container init cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.438184135 +0000 UTC m=+0.363471505 container start cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.471873965 +0000 UTC m=+0.397161295 container attach cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 03:09:40 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.465391159s) [2] async=[2] r=-1 lpr=85 pi=[50,85)/1 crt=78'487 lcod 76'486 active pruub 154.003845215s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.428462029s) [2] async=[2] r=-1 lpr=85 pi=[50,85)/1 crt=41'483 lcod 0'0 active pruub 153.966842651s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.428300858s) [2] r=-1 lpr=85 pi=[50,85)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 153.966842651s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.465301514s) [2] r=-1 lpr=85 pi=[50,85)/1 crt=78'487 lcod 76'486 unknown NOTIFY pruub 154.003845215s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=11.792774200s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 active pruub 151.332031250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:40 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=11.792756081s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 151.332031250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:40 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 85 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:40 np0005603663 nostalgic_bhaskara[98794]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:09:40 np0005603663 nostalgic_bhaskara[98794]: --> All data devices are unavailable
Jan 31 03:09:40 np0005603663 systemd[1]: libpod-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope: Deactivated successfully.
Jan 31 03:09:40 np0005603663 podman[98777]: 2026-01-31 08:09:40.841596444 +0000 UTC m=+0.766883804 container died cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 03:09:40 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 03:09:41 np0005603663 systemd[1]: var-lib-containers-storage-overlay-dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba-merged.mount: Deactivated successfully.
Jan 31 03:09:41 np0005603663 podman[98777]: 2026-01-31 08:09:41.227424952 +0000 UTC m=+1.152712312 container remove cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:09:41 np0005603663 systemd[1]: libpod-conmon-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope: Deactivated successfully.
Jan 31 03:09:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 31 03:09:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 03:09:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 03:09:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 03:09:41 np0005603663 podman[98889]: 2026-01-31 08:09:41.613207888 +0000 UTC m=+0.029204248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 03:09:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 03:09:41 np0005603663 podman[98889]: 2026-01-31 08:09:41.860812451 +0000 UTC m=+0.276808791 container create 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:41 np0005603663 systemd[1]: Started libpod-conmon-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope.
Jan 31 03:09:41 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 03:09:42 np0005603663 podman[98889]: 2026-01-31 08:09:42.404901508 +0000 UTC m=+0.820897928 container init 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:09:42 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 03:09:42 np0005603663 podman[98889]: 2026-01-31 08:09:42.414470952 +0000 UTC m=+0.830467292 container start 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:09:42 np0005603663 fervent_swartz[98905]: 167 167
Jan 31 03:09:42 np0005603663 systemd[1]: libpod-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope: Deactivated successfully.
Jan 31 03:09:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 86 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 86 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=85/86 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:42 np0005603663 podman[98889]: 2026-01-31 08:09:42.484637976 +0000 UTC m=+0.900634396 container attach 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:09:42 np0005603663 podman[98889]: 2026-01-31 08:09:42.48511229 +0000 UTC m=+0.901108670 container died 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:42 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 03:09:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 86 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=85/86 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:42 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 03:09:42 np0005603663 systemd[1]: var-lib-containers-storage-overlay-65b61540c9d1bcba4f093c979b9b921c813d1d24f575f14044b87b1cb3cb3c1f-merged.mount: Deactivated successfully.
Jan 31 03:09:42 np0005603663 podman[98889]: 2026-01-31 08:09:42.755641234 +0000 UTC m=+1.171637604 container remove 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:42 np0005603663 systemd[1]: libpod-conmon-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope: Deactivated successfully.
Jan 31 03:09:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 03:09:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 03:09:42 np0005603663 podman[98929]: 2026-01-31 08:09:42.947061528 +0000 UTC m=+0.062661052 container create 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6463876445909616e-06 of space, bias 4.0, pg target 0.001975665173509154 quantized to 16 (current 16)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.45134954743765e-06 of space, bias 1.0, pg target 0.0013354048642312951 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:09:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:42.910766201 +0000 UTC m=+0.026365755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:43 np0005603663 systemd[1]: Started libpod-conmon-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope.
Jan 31 03:09:43 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:43.111311396 +0000 UTC m=+0.226910970 container init 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:43.116354856 +0000 UTC m=+0.231954380 container start 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:43.17811271 +0000 UTC m=+0.293712244 container attach 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]: {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    "0": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "devices": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "/dev/loop3"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            ],
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_name": "ceph_lv0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_size": "21470642176",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "name": "ceph_lv0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "tags": {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_name": "ceph",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.crush_device_class": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.encrypted": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.objectstore": "bluestore",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_id": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.vdo": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.with_tpm": "0"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            },
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "vg_name": "ceph_vg0"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        }
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    ],
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    "1": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "devices": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "/dev/loop4"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            ],
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_name": "ceph_lv1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_size": "21470642176",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "name": "ceph_lv1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "tags": {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_name": "ceph",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.crush_device_class": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.encrypted": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.objectstore": "bluestore",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_id": "1",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.vdo": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.with_tpm": "0"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            },
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "vg_name": "ceph_vg1"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        }
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    ],
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    "2": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "devices": [
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "/dev/loop5"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            ],
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_name": "ceph_lv2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_size": "21470642176",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "name": "ceph_lv2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "tags": {
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.cluster_name": "ceph",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.crush_device_class": "",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.encrypted": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.objectstore": "bluestore",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osd_id": "2",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.vdo": "0",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:                "ceph.with_tpm": "0"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            },
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "type": "block",
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:            "vg_name": "ceph_vg2"
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:        }
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]:    ]
Jan 31 03:09:43 np0005603663 upbeat_albattani[98946]: }
Jan 31 03:09:43 np0005603663 systemd[1]: libpod-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope: Deactivated successfully.
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:43.456693532 +0000 UTC m=+0.572293046 container died 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 03:09:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 03:09:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 03:09:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 03:09:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 03:09:43 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941-merged.mount: Deactivated successfully.
Jan 31 03:09:43 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 87 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=15.185071945s) [1] r=-1 lpr=87 pi=[67,87)/1 crt=39'39 active pruub 162.757888794s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:43 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 87 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=15.185009003s) [1] r=-1 lpr=87 pi=[67,87)/1 crt=39'39 unknown NOTIFY pruub 162.757888794s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:43 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 87 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87) [1] r=0 lpr=87 pi=[67,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 2 objects/s recovering
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 03:09:43 np0005603663 podman[98929]: 2026-01-31 08:09:43.794157614 +0000 UTC m=+0.909757148 container remove 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:43 np0005603663 systemd[1]: libpod-conmon-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope: Deactivated successfully.
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 03:09:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.283598167 +0000 UTC m=+0.092483817 container create 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.222977937 +0000 UTC m=+0.031863447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:44 np0005603663 systemd[1]: Started libpod-conmon-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope.
Jan 31 03:09:44 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.423909814 +0000 UTC m=+0.232795314 container init 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.433377655 +0000 UTC m=+0.242263085 container start 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:44 np0005603663 nifty_leavitt[99047]: 167 167
Jan 31 03:09:44 np0005603663 systemd[1]: libpod-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope: Deactivated successfully.
Jan 31 03:09:44 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 03:09:44 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.464898141 +0000 UTC m=+0.273783561 container attach 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.465408016 +0000 UTC m=+0.274293436 container died 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 03:09:44 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.368991852s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=41'483 lcod 0'0 active pruub 153.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.369693756s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=76'486 lcod 76'486 active pruub 153.903579712s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.369609833s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 153.903579712s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.368498802s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 153.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:44 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 88 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88) [2] r=0 lpr=88 pi=[50,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:44 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 88 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88) [2] r=0 lpr=88 pi=[50,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=87/88 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87) [1] r=0 lpr=87 pi=[67,87)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:44 np0005603663 systemd[1]: var-lib-containers-storage-overlay-da25fe6ad9b8efeb1e9e99e075758e977306b49d2c3140fdb551ad5df98b9db0-merged.mount: Deactivated successfully.
Jan 31 03:09:44 np0005603663 podman[99031]: 2026-01-31 08:09:44.838393822 +0000 UTC m=+0.647279322 container remove 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:44 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 03:09:44 np0005603663 systemd[1]: libpod-conmon-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope: Deactivated successfully.
Jan 31 03:09:44 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 03:09:45 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 03:09:45 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 03:09:45 np0005603663 podman[99072]: 2026-01-31 08:09:44.979480562 +0000 UTC m=+0.024294882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:09:45 np0005603663 podman[99072]: 2026-01-31 08:09:45.129139716 +0000 UTC m=+0.173954026 container create 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:45 np0005603663 systemd[1]: Started libpod-conmon-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope.
Jan 31 03:09:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:09:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:45 np0005603663 podman[99072]: 2026-01-31 08:09:45.306782052 +0000 UTC m=+0.351596452 container init 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:09:45 np0005603663 podman[99072]: 2026-01-31 08:09:45.312839342 +0000 UTC m=+0.357653672 container start 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:09:45 np0005603663 podman[99072]: 2026-01-31 08:09:45.396070453 +0000 UTC m=+0.440884793 container attach 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 03:09:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 03:09:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 03:09:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 03:09:45 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 03:09:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:45 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:45 np0005603663 lvm[99164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:09:45 np0005603663 lvm[99164]: VG ceph_vg0 finished
Jan 31 03:09:45 np0005603663 lvm[99167]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:09:45 np0005603663 lvm[99167]: VG ceph_vg1 finished
Jan 31 03:09:45 np0005603663 lvm[99169]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:09:45 np0005603663 lvm[99169]: VG ceph_vg2 finished
Jan 31 03:09:46 np0005603663 loving_meninsky[99088]: {}
Jan 31 03:09:46 np0005603663 systemd[1]: libpod-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Deactivated successfully.
Jan 31 03:09:46 np0005603663 systemd[1]: libpod-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Consumed 1.005s CPU time.
Jan 31 03:09:46 np0005603663 podman[99072]: 2026-01-31 08:09:46.105527401 +0000 UTC m=+1.150341771 container died 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 03:09:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06-merged.mount: Deactivated successfully.
Jan 31 03:09:46 np0005603663 podman[99072]: 2026-01-31 08:09:46.373798708 +0000 UTC m=+1.418613038 container remove 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:09:46 np0005603663 systemd[1]: libpod-conmon-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Deactivated successfully.
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:09:46 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 03:09:46 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:46 np0005603663 systemd-logind[793]: New session 34 of user zuul.
Jan 31 03:09:46 np0005603663 systemd[1]: Started Session 34 of User zuul.
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 03:09:46 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 03:09:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 90 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 90 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[50,89)/1 crt=78'487 lcod 76'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:09:47 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 03:09:47 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 03:09:47 np0005603663 python3.9[99364]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:09:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 03:09:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 03:09:47 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 03:09:47 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 03:09:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 03:09:48 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 03:09:48 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.711726189s) [2] async=[2] r=-1 lpr=91 pi=[50,91)/1 crt=41'483 lcod 0'0 active pruub 161.700485229s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:48 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.711628914s) [2] r=-1 lpr=91 pi=[50,91)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 161.700485229s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:48 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.717023849s) [2] async=[2] r=-1 lpr=91 pi=[50,91)/1 crt=78'487 lcod 76'486 active pruub 161.707305908s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:48 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.716917038s) [2] r=-1 lpr=91 pi=[50,91)/1 crt=78'487 lcod 76'486 unknown NOTIFY pruub 161.707305908s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 03:09:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 03:09:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 03:09:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 03:09:49 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 03:09:49 np0005603663 python3.9[99583]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:09:49 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 92 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=91/92 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:49 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 92 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:49 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 03:09:49 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 03:09:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:09:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 31 03:09:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 31 03:09:51 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 03:09:51 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 03:09:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 683 B/s wr, 17 op/s; 69 B/s, 2 objects/s recovering
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 03:09:51 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 03:09:51 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 03:09:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 03:09:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 93 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93 pruub=12.769381523s) [1] r=-1 lpr=93 pi=[71,93)/1 crt=39'39 active pruub 168.799011230s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:52 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 93 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93 pruub=12.769127846s) [1] r=-1 lpr=93 pi=[71,93)/1 crt=39'39 unknown NOTIFY pruub 168.799011230s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:52 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 93 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93) [1] r=0 lpr=93 pi=[71,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 03:09:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 03:09:53 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 94 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=93/94 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93) [1] r=0 lpr=93 pi=[71,93)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 03:09:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 03:09:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 723 B/s wr, 18 op/s; 73 B/s, 2 objects/s recovering
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 03:09:53 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 03:09:54 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 03:09:55 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 03:09:55 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 03:09:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 31 03:09:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 03:09:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 03:09:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 03:09:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 03:09:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 03:09:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 03:09:56 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 03:09:56 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 96 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96 pruub=10.486717224s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=39'39 active pruub 170.757995605s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:09:56 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 96 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96 pruub=10.486496925s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=39'39 unknown NOTIFY pruub 170.757995605s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:09:56 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 96 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:09:56 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 03:09:56 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 03:09:56 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 31 03:09:56 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 03:09:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 03:09:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 31 03:09:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 31 03:09:57 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 03:09:58 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 97 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=96/97 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:09:58 np0005603663 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 03:09:58 np0005603663 systemd[1]: session-34.scope: Consumed 7.801s CPU time.
Jan 31 03:09:58 np0005603663 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Jan 31 03:09:58 np0005603663 systemd-logind[793]: Removed session 34.
Jan 31 03:09:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 03:09:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 03:09:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 03:09:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 31 03:09:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 31 03:09:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 03:09:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 03:09:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 03:09:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 03:09:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:09:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 03:09:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 03:09:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 31 03:09:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 03:10:00 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 03:10:00 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 03:10:00 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 03:10:00 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 03:10:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 03:10:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 03:10:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 03:10:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 03:10:01 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 03:10:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 03:10:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 03:10:01 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 03:10:01 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 03:10:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 31 03:10:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 31 03:10:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 03:10:02 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:02 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 31 03:10:02 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 31 03:10:03 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 03:10:03 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 31 03:10:03 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 31 03:10:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Jan 31 03:10:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 31 03:10:03 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 03:10:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 03:10:04 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 03:10:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 03:10:04 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 03:10:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 101 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101 pruub=13.944024086s) [2] r=-1 lpr=101 pi=[64,101)/1 crt=72'484 lcod 72'484 active pruub 182.487518311s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:04 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 101 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101 pruub=13.943953514s) [2] r=-1 lpr=101 pi=[64,101)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 182.487518311s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:04 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101) [2] r=0 lpr=101 pi=[64,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 03:10:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 03:10:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 03:10:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 03:10:05 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 03:10:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 102 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=0 lpr=102 pi=[64,102)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:06 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 102 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=0 lpr=102 pi=[64,102)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:06 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 102 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[64,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:06 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 102 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[64,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 03:10:06 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 03:10:07 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 103 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[64,102)/1 crt=75'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 03:10:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 03:10:07 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 03:10:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 31 03:10:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 03:10:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 03:10:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 03:10:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 03:10:08 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104 pruub=14.605498314s) [2] async=[2] r=-1 lpr=104 pi=[64,104)/1 crt=75'485 lcod 72'484 active pruub 186.982223511s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104 pruub=14.605233192s) [2] r=-1 lpr=104 pi=[64,104)/1 crt=75'485 lcod 72'484 unknown NOTIFY pruub 186.982223511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104 pruub=12.902797699s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=41'483 active pruub 185.281417847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104 pruub=12.902758598s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=41'483 unknown NOTIFY pruub 185.281417847s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 03:10:08 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 pct=0'0 crt=75'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:08 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 crt=75'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:08 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:08 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 03:10:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 03:10:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 03:10:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 03:10:09 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 03:10:09 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 03:10:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 31 03:10:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 03:10:09 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 03:10:09 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 03:10:10 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 03:10:10 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 03:10:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 03:10:10 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 03:10:11 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[59,105)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:11 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[59,105)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 03:10:11 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 31 03:10:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 03:10:11 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 105 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:11 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 105 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:11 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 03:10:11 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 31 03:10:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 03:10:11 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 105 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=104/105 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 31 03:10:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 03:10:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 03:10:11 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 03:10:12 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 03:10:12 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 03:10:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 03:10:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 03:10:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 03:10:13 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 03:10:13 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 106 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=13.781268120s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=41'483 active pruub 179.828384399s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:13 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 106 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=13.781217575s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=41'483 unknown NOTIFY pruub 179.828384399s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:13 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 106 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 31 03:10:13 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 107 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] async=[1] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 03:10:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 03:10:14 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 03:10:14 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[79,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:14 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[79,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:14 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108 pruub=15.017568588s) [1] async=[1] r=-1 lpr=108 pi=[59,108)/1 crt=41'483 active pruub 193.725326538s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:14 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108 pruub=15.017430305s) [1] r=-1 lpr=108 pi=[59,108)/1 crt=41'483 unknown NOTIFY pruub 193.725326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:14 np0005603663 systemd-logind[793]: New session 35 of user zuul.
Jan 31 03:10:14 np0005603663 systemd[1]: Started Session 35 of User zuul.
Jan 31 03:10:14 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 108 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:14 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 108 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:14 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:14 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:15 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 03:10:15 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 03:10:15 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 31 03:10:15 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 31 03:10:15 np0005603663 python3.9[99793]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 03:10:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 03:10:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 03:10:16 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 03:10:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 03:10:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 03:10:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 03:10:16 np0005603663 python3.9[99967]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:10:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 03:10:17 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 03:10:17 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 03:10:17 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 109 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:17 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:17 np0005603663 python3.9[100123]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:10:18 np0005603663 python3.9[100276]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:10:18 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 03:10:18 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 03:10:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 03:10:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 03:10:19 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 03:10:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 03:10:19 np0005603663 python3.9[100430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:10:19 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110 pruub=13.738016129s) [0] async=[0] r=-1 lpr=110 pi=[79,110)/1 crt=41'483 active pruub 185.798767090s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:19 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110 pruub=13.737934113s) [0] r=-1 lpr=110 pi=[79,110)/1 crt=41'483 unknown NOTIFY pruub 185.798767090s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:19 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:19 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 03:10:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:20 np0005603663 python3.9[100582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:10:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 03:10:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 03:10:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 31 03:10:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 31 03:10:20 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 03:10:20 np0005603663 python3.9[100732]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:10:20 np0005603663 network[100749]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:10:20 np0005603663 network[100750]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:10:20 np0005603663 network[100751]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:10:21 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=110/111 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:21 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 03:10:21 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 03:10:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 03:10:21 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 03:10:21 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 03:10:22 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 03:10:22 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 03:10:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 03:10:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 31 03:10:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 31 03:10:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:24 np0005603663 python3.9[101011]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:10:25 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 03:10:25 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 03:10:25 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 03:10:25 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 03:10:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 03:10:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 31 03:10:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 03:10:25 np0005603663 python3.9[101161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:10:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 03:10:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 03:10:26 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 03:10:26 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 03:10:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 03:10:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 03:10:26 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 03:10:26 np0005603663 python3.9[101315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:10:27 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 03:10:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 03:10:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 31 03:10:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 03:10:27 np0005603663 python3.9[101473]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:10:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 03:10:28 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 03:10:28 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 03:10:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 03:10:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 03:10:28 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 03:10:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 03:10:28 np0005603663 python3.9[101557]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:10:29 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 03:10:29 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 03:10:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 27 B/s, 0 objects/s recovering
Jan 31 03:10:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 31 03:10:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 03:10:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:29 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 03:10:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 03:10:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 03:10:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 03:10:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 03:10:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 03:10:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 03:10:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 114 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114 pruub=15.534884453s) [2] r=-1 lpr=114 pi=[60,114)/1 crt=75'486 lcod 75'486 active pruub 210.364212036s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:30 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 114 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114 pruub=15.534838676s) [2] r=-1 lpr=114 pi=[60,114)/1 crt=75'486 lcod 75'486 unknown NOTIFY pruub 210.364212036s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:30 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114) [2] r=0 lpr=114 pi=[60,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 03:10:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 03:10:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 03:10:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 03:10:31 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 03:10:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[60,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:31 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[60,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:31 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 115 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=0 lpr=115 pi=[60,115)/1 crt=75'486 lcod 75'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:31 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 115 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=0 lpr=115 pi=[60,115)/1 crt=75'486 lcod 75'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:31 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 03:10:31 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:10:31
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', '.mgr', 'default.rgw.control']
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:10:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 31 03:10:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 03:10:31 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 03:10:31 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 03:10:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 03:10:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 03:10:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 03:10:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 03:10:32 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:10:32 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:10:32 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:10:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:10:32 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 116 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] async=[2] r=0 lpr=115 pi=[60,115)/1 crt=78'487 lcod 75'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 03:10:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 31 03:10:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 03:10:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 03:10:34 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 03:10:34 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 03:10:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 03:10:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 03:10:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 03:10:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 03:10:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 03:10:34 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 03:10:34 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117 pruub=14.018237114s) [2] async=[2] r=-1 lpr=117 pi=[60,117)/1 crt=78'487 lcod 75'486 active pruub 212.917480469s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:34 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117 pruub=14.018141747s) [2] r=-1 lpr=117 pi=[60,117)/1 crt=78'487 lcod 75'486 unknown NOTIFY pruub 212.917480469s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:34 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:34 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 1 objects/s recovering
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 03:10:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118 pruub=9.449163437s) [0] r=-1 lpr=118 pi=[91,118)/1 crt=78'487 active pruub 197.783279419s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118 pruub=9.449121475s) [0] r=-1 lpr=118 pi=[91,118)/1 crt=78'487 unknown NOTIFY pruub 197.783279419s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:35 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=117/118 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:35 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 118 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118) [0] r=0 lpr=118 pi=[91,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 03:10:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 03:10:36 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 03:10:36 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 03:10:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 03:10:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 03:10:36 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 03:10:36 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[91,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:36 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[91,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:37 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 119 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:37 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 119 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 03:10:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 03:10:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 03:10:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Jan 31 03:10:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 31 03:10:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 03:10:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 03:10:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 03:10:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 03:10:38 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 03:10:38 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 03:10:38 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 120 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 03:10:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 03:10:39 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 03:10:39 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 03:10:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 03:10:39 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 03:10:39 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 03:10:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 03:10:39 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 03:10:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:10:39 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 03:10:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 31 03:10:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 03:10:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 03:10:40 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 03:10:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121 pruub=14.153413773s) [0] async=[0] r=-1 lpr=121 pi=[91,121)/1 crt=78'487 active pruub 206.638427734s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:40 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121 pruub=14.153292656s) [0] r=-1 lpr=121 pi=[91,121)/1 crt=78'487 unknown NOTIFY pruub 206.638427734s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:40 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:40 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 03:10:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 03:10:40 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 03:10:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 03:10:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 03:10:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 03:10:41 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 03:10:41 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 122 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=15.742437363s) [0] r=-1 lpr=122 pi=[76,122)/1 crt=75'485 active pruub 209.373794556s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:41 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 122 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=15.742307663s) [0] r=-1 lpr=122 pi=[76,122)/1 crt=75'485 unknown NOTIFY pruub 209.373794556s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:41 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122) [0] r=0 lpr=122 pi=[76,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:41 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 122 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=121/122 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 31 03:10:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 03:10:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 03:10:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 03:10:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 03:10:42 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 03:10:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:42 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123 pruub=8.551447868s) [1] r=-1 lpr=123 pi=[79,123)/1 crt=41'483 active pruub 203.832275391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:42 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123 pruub=8.551359177s) [1] r=-1 lpr=123 pi=[79,123)/1 crt=41'483 unknown NOTIFY pruub 203.832275391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:42 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123) [1] r=0 lpr=123 pi=[79,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.67515674950501e-06 of space, bias 4.0, pg target 0.0032101880994060117 quantized to 16 (current 16)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.387758839617113e-06 of space, bias 1.0, pg target 0.0013163276518851337 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:10:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:10:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 03:10:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 03:10:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 03:10:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 03:10:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 03:10:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 03:10:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 31 03:10:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 03:10:43 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 03:10:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[79,124)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:44 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[79,124)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:44 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:44 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:44 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] async=[0] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 03:10:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 03:10:45 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 03:10:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 03:10:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 03:10:45 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 125 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] async=[1] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 33 B/s, 0 objects/s recovering
Jan 31 03:10:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 03:10:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 03:10:46 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 03:10:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:46 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126 pruub=14.954847336s) [1] async=[1] r=-1 lpr=126 pi=[79,126)/1 crt=41'483 active pruub 214.125625610s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126 pruub=14.954668999s) [1] r=-1 lpr=126 pi=[79,126)/1 crt=41'483 unknown NOTIFY pruub 214.125625610s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126 pruub=13.700307846s) [0] async=[0] r=-1 lpr=126 pi=[76,126)/1 crt=75'485 active pruub 212.871459961s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:46 np0005603663 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126 pruub=13.700231552s) [0] r=-1 lpr=126 pi=[76,126)/1 crt=75'485 unknown NOTIFY pruub 212.871459961s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 03:10:46 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 pct=0'0 crt=75'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 03:10:46 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 crt=75'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 03:10:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.60354495 +0000 UTC m=+0.072707492 container create b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.550777322 +0000 UTC m=+0.019939854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 29 B/s, 0 objects/s recovering
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 03:10:47 np0005603663 systemd[1]: Started libpod-conmon-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope.
Jan 31 03:10:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:10:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:10:47 np0005603663 ceph-osd[87035]: osd.1 pg_epoch: 127 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=126/127 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:47 np0005603663 ceph-osd[85971]: osd.0 pg_epoch: 127 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=126/127 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.964514099 +0000 UTC m=+0.433676631 container init b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.972084402 +0000 UTC m=+0.441246904 container start b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:10:47 np0005603663 naughty_matsumoto[101802]: 167 167
Jan 31 03:10:47 np0005603663 systemd[1]: libpod-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope: Deactivated successfully.
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.984796251 +0000 UTC m=+0.453958783 container attach b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:10:47 np0005603663 podman[101783]: 2026-01-31 08:10:47.986628453 +0000 UTC m=+0.455790955 container died b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:10:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-130a962117b127d9d8c47844d31ffa3f36dc9080894e50343da76079d0e52d1a-merged.mount: Deactivated successfully.
Jan 31 03:10:49 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 03:10:49 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 03:10:49 np0005603663 podman[101783]: 2026-01-31 08:10:49.279654715 +0000 UTC m=+1.748817257 container remove b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:10:49 np0005603663 systemd[1]: libpod-conmon-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope: Deactivated successfully.
Jan 31 03:10:49 np0005603663 podman[101835]: 2026-01-31 08:10:49.414175349 +0000 UTC m=+0.026310453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:49 np0005603663 podman[101835]: 2026-01-31 08:10:49.536670263 +0000 UTC m=+0.148805367 container create 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:10:49 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 03:10:49 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 03:10:49 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 03:10:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 remapped+peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Jan 31 03:10:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:49 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 03:10:50 np0005603663 systemd[1]: Started libpod-conmon-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope.
Jan 31 03:10:50 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 03:10:50 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 03:10:50 np0005603663 podman[101835]: 2026-01-31 08:10:50.257064828 +0000 UTC m=+0.869199912 container init 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:10:50 np0005603663 podman[101835]: 2026-01-31 08:10:50.26599926 +0000 UTC m=+0.878134324 container start 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:50 np0005603663 podman[101835]: 2026-01-31 08:10:50.347116418 +0000 UTC m=+0.959251542 container attach 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:10:50 np0005603663 inspiring_swirles[101857]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:10:50 np0005603663 inspiring_swirles[101857]: --> All data devices are unavailable
Jan 31 03:10:50 np0005603663 systemd[1]: libpod-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope: Deactivated successfully.
Jan 31 03:10:50 np0005603663 podman[101835]: 2026-01-31 08:10:50.719017355 +0000 UTC m=+1.331152439 container died 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:10:51 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400-merged.mount: Deactivated successfully.
Jan 31 03:10:51 np0005603663 podman[101835]: 2026-01-31 08:10:51.404012892 +0000 UTC m=+2.016147996 container remove 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:10:51 np0005603663 systemd[1]: libpod-conmon-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope: Deactivated successfully.
Jan 31 03:10:51 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 03:10:51 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 03:10:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Jan 31 03:10:51 np0005603663 podman[101974]: 2026-01-31 08:10:51.823908483 +0000 UTC m=+0.031331934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:52 np0005603663 podman[101974]: 2026-01-31 08:10:52.083428762 +0000 UTC m=+0.290852233 container create 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:10:52 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 03:10:52 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 03:10:52 np0005603663 systemd[1]: Started libpod-conmon-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope.
Jan 31 03:10:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 03:10:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 03:10:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:52 np0005603663 podman[101974]: 2026-01-31 08:10:52.825005493 +0000 UTC m=+1.032428944 container init 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:10:52 np0005603663 podman[101974]: 2026-01-31 08:10:52.829396977 +0000 UTC m=+1.036820408 container start 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:10:52 np0005603663 cool_mcclintock[101990]: 167 167
Jan 31 03:10:52 np0005603663 systemd[1]: libpod-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope: Deactivated successfully.
Jan 31 03:10:52 np0005603663 podman[101974]: 2026-01-31 08:10:52.936309222 +0000 UTC m=+1.143732653 container attach 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:10:52 np0005603663 podman[101974]: 2026-01-31 08:10:52.936758105 +0000 UTC m=+1.144181536 container died 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:10:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 03:10:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 03:10:53 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bcaad873c5eb7e203e7d83d58f3db4f2dd04b85a100d222a8f383950ab0ab8e0-merged.mount: Deactivated successfully.
Jan 31 03:10:53 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 03:10:53 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 03:10:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Jan 31 03:10:53 np0005603663 podman[101974]: 2026-01-31 08:10:53.931511847 +0000 UTC m=+2.138935298 container remove 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:10:54 np0005603663 systemd[1]: libpod-conmon-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope: Deactivated successfully.
Jan 31 03:10:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 03:10:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 03:10:54 np0005603663 podman[102013]: 2026-01-31 08:10:54.044425431 +0000 UTC m=+0.017859455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:54 np0005603663 podman[102013]: 2026-01-31 08:10:54.37915509 +0000 UTC m=+0.352589064 container create fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:54 np0005603663 systemd[1]: Started libpod-conmon-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope.
Jan 31 03:10:54 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603663 podman[102013]: 2026-01-31 08:10:54.784435379 +0000 UTC m=+0.757869403 container init fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:10:54 np0005603663 podman[102013]: 2026-01-31 08:10:54.794354379 +0000 UTC m=+0.767788343 container start fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:10:54 np0005603663 podman[102013]: 2026-01-31 08:10:54.865888596 +0000 UTC m=+0.839322620 container attach fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:10:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]: {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    "0": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "devices": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "/dev/loop3"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            ],
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_name": "ceph_lv0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_size": "21470642176",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "name": "ceph_lv0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "tags": {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_name": "ceph",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.crush_device_class": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.encrypted": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.objectstore": "bluestore",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_id": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.vdo": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.with_tpm": "0"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            },
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "vg_name": "ceph_vg0"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        }
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    ],
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    "1": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "devices": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "/dev/loop4"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            ],
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_name": "ceph_lv1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_size": "21470642176",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "name": "ceph_lv1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "tags": {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_name": "ceph",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.crush_device_class": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.encrypted": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.objectstore": "bluestore",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_id": "1",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.vdo": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.with_tpm": "0"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            },
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "vg_name": "ceph_vg1"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        }
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    ],
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    "2": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "devices": [
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "/dev/loop5"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            ],
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_name": "ceph_lv2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_size": "21470642176",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "name": "ceph_lv2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "tags": {
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.cluster_name": "ceph",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.crush_device_class": "",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.encrypted": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.objectstore": "bluestore",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osd_id": "2",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.vdo": "0",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:                "ceph.with_tpm": "0"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            },
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "type": "block",
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:            "vg_name": "ceph_vg2"
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:        }
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]:    ]
Jan 31 03:10:55 np0005603663 elegant_shannon[102030]: }
Jan 31 03:10:55 np0005603663 systemd[1]: libpod-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope: Deactivated successfully.
Jan 31 03:10:55 np0005603663 podman[102013]: 2026-01-31 08:10:55.108145788 +0000 UTC m=+1.081579782 container died fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:10:55 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce-merged.mount: Deactivated successfully.
Jan 31 03:10:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 03:10:55 np0005603663 podman[102013]: 2026-01-31 08:10:55.876370581 +0000 UTC m=+1.849804585 container remove fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:55 np0005603663 systemd[1]: libpod-conmon-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope: Deactivated successfully.
Jan 31 03:10:56 np0005603663 podman[102113]: 2026-01-31 08:10:56.322945441 +0000 UTC m=+0.025323140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:56 np0005603663 podman[102113]: 2026-01-31 08:10:56.523479332 +0000 UTC m=+0.225857051 container create 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:10:56 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 03:10:56 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 03:10:56 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 03:10:56 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 03:10:56 np0005603663 systemd[1]: Started libpod-conmon-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope.
Jan 31 03:10:56 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:57 np0005603663 podman[102113]: 2026-01-31 08:10:57.031638896 +0000 UTC m=+0.734016665 container init 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:10:57 np0005603663 podman[102113]: 2026-01-31 08:10:57.036549217 +0000 UTC m=+0.738926896 container start 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:10:57 np0005603663 charming_wu[102129]: 167 167
Jan 31 03:10:57 np0005603663 systemd[1]: libpod-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope: Deactivated successfully.
Jan 31 03:10:57 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 03:10:57 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 03:10:57 np0005603663 podman[102113]: 2026-01-31 08:10:57.285645881 +0000 UTC m=+0.988023610 container attach 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:57 np0005603663 podman[102113]: 2026-01-31 08:10:57.286096543 +0000 UTC m=+0.988474282 container died 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:10:57 np0005603663 systemd[1]: var-lib-containers-storage-overlay-30eef2469e24f8ed92f34274b05b3f6396b0fe465cce6b5ce03bbc7879acb1b4-merged.mount: Deactivated successfully.
Jan 31 03:10:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 03:10:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 03:10:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 03:10:57 np0005603663 podman[102113]: 2026-01-31 08:10:57.976554768 +0000 UTC m=+1.678932437 container remove 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:10:58 np0005603663 systemd[1]: libpod-conmon-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope: Deactivated successfully.
Jan 31 03:10:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 03:10:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 03:10:58 np0005603663 podman[102154]: 2026-01-31 08:10:58.206173119 +0000 UTC m=+0.115132370 container create a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:10:58 np0005603663 podman[102154]: 2026-01-31 08:10:58.112977148 +0000 UTC m=+0.021936439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:10:58 np0005603663 systemd[1]: Started libpod-conmon-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope.
Jan 31 03:10:58 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:10:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:58 np0005603663 podman[102154]: 2026-01-31 08:10:58.914976216 +0000 UTC m=+0.823935547 container init a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:10:58 np0005603663 podman[102154]: 2026-01-31 08:10:58.925395626 +0000 UTC m=+0.834354897 container start a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:10:59 np0005603663 podman[102154]: 2026-01-31 08:10:59.423590252 +0000 UTC m=+1.332549503 container attach a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:10:59 np0005603663 lvm[102255]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:10:59 np0005603663 lvm[102255]: VG ceph_vg0 finished
Jan 31 03:10:59 np0005603663 lvm[102258]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:10:59 np0005603663 lvm[102258]: VG ceph_vg1 finished
Jan 31 03:10:59 np0005603663 lvm[102259]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:10:59 np0005603663 lvm[102259]: VG ceph_vg2 finished
Jan 31 03:10:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 03:10:59 np0005603663 affectionate_gauss[102179]: {}
Jan 31 03:10:59 np0005603663 systemd[1]: libpod-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Deactivated successfully.
Jan 31 03:10:59 np0005603663 podman[102154]: 2026-01-31 08:10:59.860482703 +0000 UTC m=+1.769441994 container died a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:10:59 np0005603663 systemd[1]: libpod-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Consumed 1.055s CPU time.
Jan 31 03:10:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:00 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de-merged.mount: Deactivated successfully.
Jan 31 03:11:00 np0005603663 podman[102154]: 2026-01-31 08:11:00.632071145 +0000 UTC m=+2.541030396 container remove a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:11:00 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 03:11:00 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 03:11:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:11:00 np0005603663 systemd[1]: libpod-conmon-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Deactivated successfully.
Jan 31 03:11:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:11:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:11:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:11:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 03:11:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 03:11:01 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 03:11:01 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 03:11:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 03:11:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:11:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:11:02 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 03:11:02 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:03 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 03:11:03 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 03:11:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 03:11:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 03:11:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 03:11:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 03:11:07 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 03:11:07 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 03:11:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:08 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 03:11:08 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 03:11:08 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 03:11:08 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 03:11:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 03:11:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 03:11:11 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 03:11:11 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 03:11:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:12 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 03:11:12 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 03:11:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:14 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 03:11:14 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 03:11:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:15 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 03:11:15 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 03:11:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 03:11:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 03:11:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 03:11:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 03:11:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 03:11:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 03:11:17 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 03:11:17 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 03:11:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:18 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 03:11:18 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 03:11:19 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 03:11:19 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 03:11:19 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 03:11:19 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 03:11:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:20 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 03:11:20 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 03:11:20 np0005603663 python3.9[102472]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:11:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:22 np0005603663 python3.9[102759]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 03:11:23 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 03:11:23 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 03:11:23 np0005603663 python3.9[102911]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 03:11:23 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 03:11:23 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 03:11:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:23 np0005603663 python3.9[103063]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:11:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 03:11:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 03:11:24 np0005603663 python3.9[103215]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 03:11:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:26 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 31 03:11:26 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 31 03:11:26 np0005603663 python3.9[103367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:11:26 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 03:11:26 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 03:11:26 np0005603663 python3.9[103519]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:11:26 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 03:11:27 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 03:11:27 np0005603663 python3.9[103597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:11:27 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 03:11:27 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 03:11:27 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 03:11:27 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 03:11:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:28 np0005603663 python3.9[103749]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:11:28 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 03:11:28 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 03:11:29 np0005603663 python3.9[103903]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 03:11:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:30 np0005603663 python3.9[104056]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 03:11:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 03:11:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 03:11:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 31 03:11:30 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 31 03:11:31 np0005603663 python3.9[104209]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:11:31
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta']
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:11:31 np0005603663 python3.9[104361]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 03:11:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:32 np0005603663 python3.9[104513]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:11:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:11:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 03:11:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 03:11:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:34 np0005603663 python3.9[104667]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:11:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 03:11:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 03:11:34 np0005603663 python3.9[104819]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:11:35 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 03:11:35 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 03:11:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:35 np0005603663 python3.9[104897]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:11:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:36 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 03:11:36 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 03:11:36 np0005603663 python3.9[105049]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:11:36 np0005603663 python3.9[105127]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:11:36 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 03:11:36 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 03:11:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 03:11:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 03:11:37 np0005603663 python3.9[105279]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:11:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:39 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 03:11:39 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 03:11:39 np0005603663 python3.9[105430]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:11:39 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 03:11:39 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 03:11:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:39 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 03:11:39 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 03:11:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 03:11:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 03:11:40 np0005603663 python3.9[105582]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 03:11:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:40 np0005603663 python3.9[105732]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:11:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 03:11:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 03:11:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 03:11:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 03:11:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 31 03:11:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 31 03:11:42 np0005603663 python3.9[105884]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:11:42 np0005603663 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 03:11:42 np0005603663 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 03:11:42 np0005603663 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 03:11:42 np0005603663 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 03:11:42 np0005603663 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 03:11:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 03:11:42 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 03:11:42 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:11:43 np0005603663 python3.9[106047]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:11:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 03:11:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 03:11:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:45 np0005603663 python3.9[106199]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:11:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:45 np0005603663 python3.9[106353]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:11:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 03:11:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 03:11:46 np0005603663 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 03:11:46 np0005603663 systemd[1]: session-35.scope: Consumed 1min 1.798s CPU time.
Jan 31 03:11:46 np0005603663 systemd-logind[793]: Session 35 logged out. Waiting for processes to exit.
Jan 31 03:11:46 np0005603663 systemd-logind[793]: Removed session 35.
Jan 31 03:11:46 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 03:11:46 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 03:11:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:47 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 03:11:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 03:11:47 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 03:11:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 03:11:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 03:11:48 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 03:11:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:49 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 03:11:49 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 03:11:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 03:11:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 03:11:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:52 np0005603663 systemd-logind[793]: New session 36 of user zuul.
Jan 31 03:11:52 np0005603663 systemd[1]: Started Session 36 of User zuul.
Jan 31 03:11:52 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 03:11:52 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 03:11:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 03:11:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 03:11:53 np0005603663 python3.9[106533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:11:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 03:11:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 03:11:54 np0005603663 python3.9[106689]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 03:11:55 np0005603663 python3.9[106842]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:11:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:11:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 31 03:11:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 31 03:11:55 np0005603663 python3.9[106926]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 03:11:56 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 03:11:56 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 03:11:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 31 03:11:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 31 03:11:57 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 03:11:57 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 03:11:58 np0005603663 python3.9[107080]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:11:58 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 03:11:58 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 03:11:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 03:11:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 03:11:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:11:59 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 31 03:11:59 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 31 03:12:00 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 03:12:00 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 03:12:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:00 np0005603663 python3.9[107233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:12:01 np0005603663 python3.9[107386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:12:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:12:02 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 03:12:02 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 03:12:02 np0005603663 python3.9[107651]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.241756264 +0000 UTC m=+0.019889422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.355484474 +0000 UTC m=+0.133617642 container create 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:12:02 np0005603663 systemd[76601]: Created slice User Background Tasks Slice.
Jan 31 03:12:02 np0005603663 systemd[76601]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 03:12:02 np0005603663 systemd[1]: Started libpod-conmon-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope.
Jan 31 03:12:02 np0005603663 systemd[76601]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 03:12:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.459902237 +0000 UTC m=+0.238035505 container init 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.467545289 +0000 UTC m=+0.245678467 container start 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:12:02 np0005603663 determined_boyd[107700]: 167 167
Jan 31 03:12:02 np0005603663 systemd[1]: libpod-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope: Deactivated successfully.
Jan 31 03:12:02 np0005603663 conmon[107700]: conmon 28ba0dcea78e3eb67911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope/container/memory.events
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.535531172 +0000 UTC m=+0.313664320 container attach 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.537079165 +0000 UTC m=+0.315212313 container died 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:12:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:12:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9a542f14f33c45870701405b7be590da40a570738c072df8d43137e29dd98fe7-merged.mount: Deactivated successfully.
Jan 31 03:12:02 np0005603663 podman[107683]: 2026-01-31 08:12:02.966727097 +0000 UTC m=+0.744860235 container remove 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:12:03 np0005603663 systemd[1]: libpod-conmon-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope: Deactivated successfully.
Jan 31 03:12:03 np0005603663 python3.9[107868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.17587171 +0000 UTC m=+0.093174142 container create c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.109906823 +0000 UTC m=+0.027209275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:03 np0005603663 systemd[1]: Started libpod-conmon-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope.
Jan 31 03:12:03 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.327866631 +0000 UTC m=+0.245169143 container init c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.334650789 +0000 UTC m=+0.251953231 container start c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.381661501 +0000 UTC m=+0.298964043 container attach c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:12:03 np0005603663 objective_khayyam[107897]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:12:03 np0005603663 objective_khayyam[107897]: --> All data devices are unavailable
Jan 31 03:12:03 np0005603663 systemd[1]: libpod-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope: Deactivated successfully.
Jan 31 03:12:03 np0005603663 podman[107876]: 2026-01-31 08:12:03.762695215 +0000 UTC m=+0.679997647 container died c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:12:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:03 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 31 03:12:03 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 31 03:12:04 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41-merged.mount: Deactivated successfully.
Jan 31 03:12:04 np0005603663 python3.9[108082]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:04 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 03:12:04 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 03:12:04 np0005603663 podman[107876]: 2026-01-31 08:12:04.113578735 +0000 UTC m=+1.030881177 container remove c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:12:04 np0005603663 systemd[1]: libpod-conmon-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope: Deactivated successfully.
Jan 31 03:12:04 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 03:12:04 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 03:12:04 np0005603663 podman[108151]: 2026-01-31 08:12:04.796090511 +0000 UTC m=+0.020965271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:04 np0005603663 podman[108151]: 2026-01-31 08:12:04.919597313 +0000 UTC m=+0.144472083 container create f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:12:04 np0005603663 systemd[1]: Started libpod-conmon-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope.
Jan 31 03:12:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:05 np0005603663 podman[108151]: 2026-01-31 08:12:05.014764079 +0000 UTC m=+0.239639299 container init f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:12:05 np0005603663 podman[108151]: 2026-01-31 08:12:05.023164512 +0000 UTC m=+0.248039252 container start f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:12:05 np0005603663 competent_bohr[108166]: 167 167
Jan 31 03:12:05 np0005603663 systemd[1]: libpod-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope: Deactivated successfully.
Jan 31 03:12:05 np0005603663 podman[108151]: 2026-01-31 08:12:05.036512831 +0000 UTC m=+0.261387601 container attach f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:12:05 np0005603663 podman[108151]: 2026-01-31 08:12:05.036929003 +0000 UTC m=+0.261803743 container died f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:12:05 np0005603663 systemd[1]: var-lib-containers-storage-overlay-097b225f4082d498242581bde5d605881fb6107b2f5c3f71b0c58538869c289b-merged.mount: Deactivated successfully.
Jan 31 03:12:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:05 np0005603663 podman[108151]: 2026-01-31 08:12:05.261106003 +0000 UTC m=+0.485980773 container remove f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:12:05 np0005603663 systemd[1]: libpod-conmon-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope: Deactivated successfully.
Jan 31 03:12:05 np0005603663 podman[108216]: 2026-01-31 08:12:05.49492926 +0000 UTC m=+0.115433499 container create b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:12:05 np0005603663 podman[108216]: 2026-01-31 08:12:05.417896276 +0000 UTC m=+0.038400605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:05 np0005603663 systemd[1]: Started libpod-conmon-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope.
Jan 31 03:12:05 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:05 np0005603663 podman[108216]: 2026-01-31 08:12:05.741792659 +0000 UTC m=+0.362296988 container init b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:12:05 np0005603663 podman[108216]: 2026-01-31 08:12:05.748517695 +0000 UTC m=+0.369021974 container start b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:12:05 np0005603663 podman[108216]: 2026-01-31 08:12:05.77180739 +0000 UTC m=+0.392311729 container attach b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:12:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:06 np0005603663 epic_wiles[108284]: {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    "0": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "devices": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "/dev/loop3"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            ],
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_name": "ceph_lv0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_size": "21470642176",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "name": "ceph_lv0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "tags": {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_name": "ceph",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.crush_device_class": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.encrypted": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.objectstore": "bluestore",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_id": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.vdo": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.with_tpm": "0"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            },
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "vg_name": "ceph_vg0"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        }
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    ],
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    "1": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "devices": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "/dev/loop4"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            ],
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_name": "ceph_lv1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_size": "21470642176",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "name": "ceph_lv1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "tags": {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_name": "ceph",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.crush_device_class": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.encrypted": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.objectstore": "bluestore",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_id": "1",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.vdo": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.with_tpm": "0"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            },
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "vg_name": "ceph_vg1"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        }
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    ],
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    "2": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "devices": [
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "/dev/loop5"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            ],
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_name": "ceph_lv2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_size": "21470642176",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "name": "ceph_lv2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "tags": {
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.cluster_name": "ceph",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.crush_device_class": "",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.encrypted": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.objectstore": "bluestore",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osd_id": "2",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.vdo": "0",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:                "ceph.with_tpm": "0"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            },
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "type": "block",
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:            "vg_name": "ceph_vg2"
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:        }
Jan 31 03:12:06 np0005603663 epic_wiles[108284]:    ]
Jan 31 03:12:06 np0005603663 epic_wiles[108284]: }
Jan 31 03:12:06 np0005603663 python3.9[108364]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:12:06 np0005603663 systemd[1]: libpod-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope: Deactivated successfully.
Jan 31 03:12:06 np0005603663 podman[108216]: 2026-01-31 08:12:06.053517744 +0000 UTC m=+0.674021983 container died b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:12:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73-merged.mount: Deactivated successfully.
Jan 31 03:12:06 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 03:12:06 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 03:12:06 np0005603663 podman[108216]: 2026-01-31 08:12:06.710533544 +0000 UTC m=+1.331037803 container remove b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:12:06 np0005603663 systemd[1]: libpod-conmon-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope: Deactivated successfully.
Jan 31 03:12:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 03:12:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.088017631 +0000 UTC m=+0.017926888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.196156476 +0000 UTC m=+0.126065693 container create 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:12:07 np0005603663 systemd[1]: Started libpod-conmon-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope.
Jan 31 03:12:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.521687553 +0000 UTC m=+0.451596810 container init 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.52879726 +0000 UTC m=+0.458706497 container start 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.533711326 +0000 UTC m=+0.463620573 container attach 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:12:07 np0005603663 systemd[1]: libpod-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope: Deactivated successfully.
Jan 31 03:12:07 np0005603663 tender_austin[108748]: 167 167
Jan 31 03:12:07 np0005603663 conmon[108748]: conmon 2eb3ec876a50b90ca122 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope/container/memory.events
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.539446665 +0000 UTC m=+0.469355882 container died 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:12:07 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5b6c4ddbe4b9c432c5bbb9e770ed12c20e686f5bac3df614e88240dded7f7380-merged.mount: Deactivated successfully.
Jan 31 03:12:07 np0005603663 python3.9[108745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 03:12:07 np0005603663 podman[108656]: 2026-01-31 08:12:07.588804252 +0000 UTC m=+0.518713489 container remove 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:12:07 np0005603663 systemd[1]: libpod-conmon-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope: Deactivated successfully.
Jan 31 03:12:07 np0005603663 podman[108796]: 2026-01-31 08:12:07.7403518 +0000 UTC m=+0.066223845 container create fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:12:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:07 np0005603663 systemd[1]: Started libpod-conmon-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope.
Jan 31 03:12:07 np0005603663 podman[108796]: 2026-01-31 08:12:07.707979723 +0000 UTC m=+0.033851819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:12:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:12:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:07 np0005603663 podman[108796]: 2026-01-31 08:12:07.887073325 +0000 UTC m=+0.212945410 container init fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:12:07 np0005603663 podman[108796]: 2026-01-31 08:12:07.892060633 +0000 UTC m=+0.217932638 container start fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:12:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 03:12:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 03:12:07 np0005603663 podman[108796]: 2026-01-31 08:12:07.958691339 +0000 UTC m=+0.284563424 container attach fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:12:08 np0005603663 python3.9[108952]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:12:08 np0005603663 lvm[109043]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:12:08 np0005603663 lvm[109042]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:12:08 np0005603663 lvm[109043]: VG ceph_vg1 finished
Jan 31 03:12:08 np0005603663 lvm[109042]: VG ceph_vg0 finished
Jan 31 03:12:08 np0005603663 lvm[109045]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:12:08 np0005603663 lvm[109045]: VG ceph_vg2 finished
Jan 31 03:12:08 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 31 03:12:08 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 31 03:12:08 np0005603663 loving_volhard[108853]: {}
Jan 31 03:12:08 np0005603663 systemd[1]: libpod-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Deactivated successfully.
Jan 31 03:12:08 np0005603663 systemd[1]: libpod-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Consumed 1.015s CPU time.
Jan 31 03:12:08 np0005603663 podman[108796]: 2026-01-31 08:12:08.654472413 +0000 UTC m=+0.980344418 container died fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:12:08 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 03:12:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e-merged.mount: Deactivated successfully.
Jan 31 03:12:08 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 03:12:09 np0005603663 python3.9[109186]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:09 np0005603663 podman[108796]: 2026-01-31 08:12:09.388787044 +0000 UTC m=+1.714659059 container remove fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 03:12:09 np0005603663 systemd[1]: libpod-conmon-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Deactivated successfully.
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:09 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:12:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:10 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 03:12:10 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 03:12:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:10 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 03:12:10 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 03:12:11 np0005603663 python3.9[109365]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 03:12:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 03:12:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:11 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 03:12:12 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 03:12:13 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 03:12:13 np0005603663 python3.9[109518]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:12:13 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 03:12:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:13 np0005603663 python3.9[109672]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 03:12:13 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 03:12:14 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 03:12:14 np0005603663 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 03:12:14 np0005603663 systemd[1]: session-36.scope: Consumed 16.357s CPU time.
Jan 31 03:12:14 np0005603663 systemd-logind[793]: Session 36 logged out. Waiting for processes to exit.
Jan 31 03:12:14 np0005603663 systemd-logind[793]: Removed session 36.
Jan 31 03:12:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 03:12:16 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 03:12:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 03:12:16 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 03:12:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 03:12:16 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 03:12:17 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 03:12:17 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 03:12:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:18 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 03:12:18 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 03:12:18 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 03:12:18 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 03:12:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 31 03:12:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 31 03:12:19 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 03:12:19 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 03:12:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 03:12:19 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 03:12:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:20 np0005603663 systemd-logind[793]: New session 37 of user zuul.
Jan 31 03:12:20 np0005603663 systemd[1]: Started Session 37 of User zuul.
Jan 31 03:12:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 03:12:20 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 03:12:21 np0005603663 python3.9[109850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:21 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 31 03:12:21 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 31 03:12:22 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 03:12:22 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 03:12:22 np0005603663 python3.9[110004]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:12:23 np0005603663 python3.9[110197]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:12:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:23 np0005603663 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 03:12:23 np0005603663 systemd[1]: session-37.scope: Consumed 1.977s CPU time.
Jan 31 03:12:23 np0005603663 systemd-logind[793]: Session 37 logged out. Waiting for processes to exit.
Jan 31 03:12:23 np0005603663 systemd-logind[793]: Removed session 37.
Jan 31 03:12:23 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 03:12:24 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 03:12:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 03:12:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 03:12:24 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 03:12:24 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 03:12:25 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 03:12:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:25 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 03:12:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:28 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 03:12:28 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 03:12:29 np0005603663 systemd-logind[793]: New session 38 of user zuul.
Jan 31 03:12:29 np0005603663 systemd[1]: Started Session 38 of User zuul.
Jan 31 03:12:29 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 03:12:29 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 03:12:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 03:12:30 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 03:12:30 np0005603663 python3.9[110377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 03:12:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 03:12:31 np0005603663 python3.9[110531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:12:31
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms', 'backups']
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:12:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 31 03:12:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 31 03:12:32 np0005603663 python3.9[110687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:12:32 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 03:12:32 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:12:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:12:33 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 03:12:33 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 03:12:33 np0005603663 python3.9[110771]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:34 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 31 03:12:34 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 31 03:12:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 03:12:34 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 03:12:35 np0005603663 python3.9[110924]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:12:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:35 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 03:12:35 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 03:12:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:36 np0005603663 python3.9[111119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:12:36 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 03:12:36 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 03:12:36 np0005603663 python3.9[111271]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:12:37 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 03:12:37 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 03:12:37 np0005603663 python3.9[111436]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:12:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:37 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 03:12:37 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 03:12:38 np0005603663 python3.9[111514]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:12:38 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 03:12:38 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 03:12:38 np0005603663 python3.9[111666]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:12:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 03:12:38 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 03:12:39 np0005603663 python3.9[111744]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:12:39 np0005603663 python3.9[111896]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:12:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:40 np0005603663 python3.9[112048]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:12:40 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 03:12:40 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 03:12:40 np0005603663 python3.9[112200]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:12:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 03:12:40 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 03:12:41 np0005603663 python3.9[112352]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:12:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 03:12:41 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 03:12:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 03:12:41 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 03:12:42 np0005603663 python3.9[112504]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:12:43 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 03:12:43 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 03:12:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 03:12:43 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 03:12:44 np0005603663 python3.9[112657]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:12:44 np0005603663 python3.9[112811]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:12:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 03:12:44 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 03:12:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:45 np0005603663 python3.9[112963]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:12:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 03:12:45 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 03:12:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 03:12:45 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 03:12:46 np0005603663 python3.9[113115]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:12:46 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 03:12:46 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 03:12:47 np0005603663 python3.9[113268]: ansible-service_facts Invoked
Jan 31 03:12:47 np0005603663 network[113285]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:12:47 np0005603663 network[113286]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:12:47 np0005603663 network[113287]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:12:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 03:12:47 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 03:12:48 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 03:12:48 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 03:12:48 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 03:12:48 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 03:12:49 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 03:12:49 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 03:12:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 03:12:50 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 03:12:51 np0005603663 python3.9[113739]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:12:51 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 03:12:51 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 03:12:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:52 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 03:12:52 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 03:12:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 03:12:52 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 03:12:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 31 03:12:53 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 31 03:12:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:53 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 03:12:53 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 03:12:53 np0005603663 python3.9[113892]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 03:12:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 03:12:54 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 03:12:54 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 03:12:54 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 03:12:55 np0005603663 python3.9[114044]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:12:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 03:12:55 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 03:12:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:12:55 np0005603663 python3.9[114122]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:12:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:56 np0005603663 python3.9[114274]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:12:56 np0005603663 python3.9[114352]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:12:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 03:12:57 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 03:12:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:58 np0005603663 python3.9[114504]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:12:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 03:12:58 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 03:12:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 03:12:58 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 03:12:59 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 03:12:59 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 03:12:59 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 03:12:59 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 03:12:59 np0005603663 python3.9[114656]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:12:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:12:59 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 03:12:59 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 03:13:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:00 np0005603663 python3.9[114740]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:13:00 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 03:13:00 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 03:13:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 31 03:13:01 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 31 03:13:01 np0005603663 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 03:13:01 np0005603663 systemd[1]: session-38.scope: Consumed 20.759s CPU time.
Jan 31 03:13:01 np0005603663 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Jan 31 03:13:01 np0005603663 systemd-logind[793]: Removed session 38.
Jan 31 03:13:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:02 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 31 03:13:02 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 31 03:13:03 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 03:13:03 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 03:13:03 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 03:13:03 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 03:13:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:04 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 03:13:04 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 03:13:04 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 03:13:04 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.666167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184666217, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7316, "num_deletes": 251, "total_data_size": 10034784, "memory_usage": 10193880, "flush_reason": "Manual Compaction"}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184703856, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8012393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7459, "table_properties": {"data_size": 7984438, "index_size": 18496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77694, "raw_average_key_size": 23, "raw_value_size": 7919647, "raw_average_value_size": 2378, "num_data_blocks": 812, "num_entries": 3330, "num_filter_entries": 3330, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846773, "oldest_key_time": 1769846773, "file_creation_time": 1769847184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 37773 microseconds, and 9846 cpu microseconds.
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.703931) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8012393 bytes OK
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.703961) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705591) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705618) EVENT_LOG_v1 {"time_micros": 1769847184705611, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10002761, prev total WAL file size 10002761, number of live WAL files 2.
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.707371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7824KB) 13(58KB) 8(1944B)]
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184707461, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8074297, "oldest_snapshot_seqno": -1}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3156 keys, 8027119 bytes, temperature: kUnknown
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184751635, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8027119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7999599, "index_size": 18514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76134, "raw_average_key_size": 24, "raw_value_size": 7936167, "raw_average_value_size": 2514, "num_data_blocks": 814, "num_entries": 3156, "num_filter_entries": 3156, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.751885) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8027119 bytes
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.753056) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.4 rd, 181.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3445, records dropped: 289 output_compression: NoCompression
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.753089) EVENT_LOG_v1 {"time_micros": 1769847184753074, "job": 4, "event": "compaction_finished", "compaction_time_micros": 44257, "compaction_time_cpu_micros": 14795, "output_level": 6, "num_output_files": 1, "total_output_size": 8027119, "num_input_records": 3445, "num_output_records": 3156, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754675, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754772, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754823, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 03:13:04 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.707223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 03:13:05 np0005603663 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 03:13:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:05 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 03:13:05 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 03:13:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 03:13:06 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 03:13:07 np0005603663 systemd-logind[793]: New session 39 of user zuul.
Jan 31 03:13:07 np0005603663 systemd[1]: Started Session 39 of User zuul.
Jan 31 03:13:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 03:13:07 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 03:13:08 np0005603663 python3.9[114923]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:09 np0005603663 python3.9[115075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:09 np0005603663 python3.9[115153]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:10 np0005603663 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 03:13:10 np0005603663 systemd[1]: session-39.scope: Consumed 1.325s CPU time.
Jan 31 03:13:10 np0005603663 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Jan 31 03:13:10 np0005603663 systemd-logind[793]: Removed session 39.
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:13:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.754198841 +0000 UTC m=+0.048445462 container create 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:10 np0005603663 systemd[1]: Started libpod-conmon-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope.
Jan 31 03:13:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.830053422 +0000 UTC m=+0.124300093 container init 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.734782706 +0000 UTC m=+0.029029367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.836916885 +0000 UTC m=+0.131163496 container start 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.839709383 +0000 UTC m=+0.133956034 container attach 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:13:10 np0005603663 goofy_chaum[115337]: 167 167
Jan 31 03:13:10 np0005603663 systemd[1]: libpod-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope: Deactivated successfully.
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.842514502 +0000 UTC m=+0.136761113 container died 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:13:10 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a83a98542081ae0c3fa5247d7483848fc73d192f7d6a109a0fb694b7c9c0aa88-merged.mount: Deactivated successfully.
Jan 31 03:13:10 np0005603663 podman[115321]: 2026-01-31 08:13:10.889017548 +0000 UTC m=+0.183264179 container remove 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:13:10 np0005603663 systemd[1]: libpod-conmon-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope: Deactivated successfully.
Jan 31 03:13:10 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 03:13:10 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 03:13:10 np0005603663 podman[115361]: 2026-01-31 08:13:10.987781363 +0000 UTC m=+0.029075808 container create cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 03:13:11 np0005603663 systemd[1]: Started libpod-conmon-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope.
Jan 31 03:13:11 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:13:11 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:11 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:13:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:11.061702949 +0000 UTC m=+0.102997454 container init cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:11.067598815 +0000 UTC m=+0.108893300 container start cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:11.070514807 +0000 UTC m=+0.111809262 container attach cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:10.974979283 +0000 UTC m=+0.016273748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 03:13:11 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 03:13:11 np0005603663 interesting_pare[115377]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:13:11 np0005603663 interesting_pare[115377]: --> All data devices are unavailable
Jan 31 03:13:11 np0005603663 systemd[1]: libpod-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope: Deactivated successfully.
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:11.50385324 +0000 UTC m=+0.545147745 container died cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:13:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8-merged.mount: Deactivated successfully.
Jan 31 03:13:11 np0005603663 podman[115361]: 2026-01-31 08:13:11.554382189 +0000 UTC m=+0.595676674 container remove cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:13:11 np0005603663 systemd[1]: libpod-conmon-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope: Deactivated successfully.
Jan 31 03:13:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:11 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 03:13:11 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 03:13:11 np0005603663 podman[115471]: 2026-01-31 08:13:11.96206171 +0000 UTC m=+0.040318633 container create 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:13:11 np0005603663 systemd[1]: Started libpod-conmon-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope.
Jan 31 03:13:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:12.033332443 +0000 UTC m=+0.111589386 container init 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:11.941887424 +0000 UTC m=+0.020144377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:12.038834107 +0000 UTC m=+0.117091040 container start 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:12.041745379 +0000 UTC m=+0.120002312 container attach 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:13:12 np0005603663 awesome_proskuriakova[115487]: 167 167
Jan 31 03:13:12 np0005603663 systemd[1]: libpod-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope: Deactivated successfully.
Jan 31 03:13:12 np0005603663 conmon[115487]: conmon 935374ad7755ab8f3f64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope/container/memory.events
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:12.044125656 +0000 UTC m=+0.122382579 container died 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:13:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-587470afab33dd01dbc16c5351765eb3bf1d64b85f536d6470c5708a0ef6d7e9-merged.mount: Deactivated successfully.
Jan 31 03:13:12 np0005603663 podman[115471]: 2026-01-31 08:13:12.072935445 +0000 UTC m=+0.151192368 container remove 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:13:12 np0005603663 systemd[1]: libpod-conmon-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope: Deactivated successfully.
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.192347959 +0000 UTC m=+0.040968281 container create 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:12 np0005603663 systemd[1]: Started libpod-conmon-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope.
Jan 31 03:13:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.255594656 +0000 UTC m=+0.104215028 container init 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.264076874 +0000 UTC m=+0.112697196 container start 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.169621881 +0000 UTC m=+0.018242233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.267276624 +0000 UTC m=+0.115896976 container attach 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]: {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    "0": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "devices": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "/dev/loop3"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            ],
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_name": "ceph_lv0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_size": "21470642176",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "name": "ceph_lv0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "tags": {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_name": "ceph",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.crush_device_class": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.encrypted": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.objectstore": "bluestore",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_id": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.vdo": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.with_tpm": "0"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            },
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "vg_name": "ceph_vg0"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        }
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    ],
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    "1": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "devices": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "/dev/loop4"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            ],
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_name": "ceph_lv1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_size": "21470642176",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "name": "ceph_lv1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "tags": {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_name": "ceph",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.crush_device_class": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.encrypted": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.objectstore": "bluestore",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_id": "1",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.vdo": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.with_tpm": "0"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            },
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "vg_name": "ceph_vg1"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        }
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    ],
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    "2": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "devices": [
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "/dev/loop5"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            ],
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_name": "ceph_lv2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_size": "21470642176",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "name": "ceph_lv2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "tags": {
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.cluster_name": "ceph",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.crush_device_class": "",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.encrypted": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.objectstore": "bluestore",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osd_id": "2",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.vdo": "0",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:                "ceph.with_tpm": "0"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            },
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "type": "block",
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:            "vg_name": "ceph_vg2"
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:        }
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]:    ]
Jan 31 03:13:12 np0005603663 zealous_wilson[115527]: }
Jan 31 03:13:12 np0005603663 systemd[1]: libpod-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope: Deactivated successfully.
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.537911997 +0000 UTC m=+0.386532319 container died 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213-merged.mount: Deactivated successfully.
Jan 31 03:13:12 np0005603663 podman[115510]: 2026-01-31 08:13:12.573858467 +0000 UTC m=+0.422478789 container remove 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:13:12 np0005603663 systemd[1]: libpod-conmon-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope: Deactivated successfully.
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.062797921 +0000 UTC m=+0.060321945 container create 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:13:13 np0005603663 systemd[1]: Started libpod-conmon-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope.
Jan 31 03:13:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.037938123 +0000 UTC m=+0.035462157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.135236106 +0000 UTC m=+0.132760150 container init 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.144369403 +0000 UTC m=+0.141893447 container start 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.147702246 +0000 UTC m=+0.145226350 container attach 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:13 np0005603663 happy_cray[115626]: 167 167
Jan 31 03:13:13 np0005603663 systemd[1]: libpod-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope: Deactivated successfully.
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.149875047 +0000 UTC m=+0.147399051 container died 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:13:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-18db69c5f9bcfdc5fabcd1453d79a413435e795bfe8f8c8daa11ee99af4a014e-merged.mount: Deactivated successfully.
Jan 31 03:13:13 np0005603663 podman[115610]: 2026-01-31 08:13:13.194393438 +0000 UTC m=+0.191917452 container remove 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:13:13 np0005603663 systemd[1]: libpod-conmon-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope: Deactivated successfully.
Jan 31 03:13:13 np0005603663 podman[115651]: 2026-01-31 08:13:13.396209427 +0000 UTC m=+0.092234512 container create 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:13:13 np0005603663 podman[115651]: 2026-01-31 08:13:13.339803093 +0000 UTC m=+0.035828268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:13:13 np0005603663 systemd[1]: Started libpod-conmon-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope.
Jan 31 03:13:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:13:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:13 np0005603663 podman[115651]: 2026-01-31 08:13:13.523484913 +0000 UTC m=+0.219510038 container init 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:13 np0005603663 podman[115651]: 2026-01-31 08:13:13.531905749 +0000 UTC m=+0.227930864 container start 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:13:13 np0005603663 podman[115651]: 2026-01-31 08:13:13.535674665 +0000 UTC m=+0.231699840 container attach 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:13:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:14 np0005603663 lvm[115747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:13:14 np0005603663 lvm[115746]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:13:14 np0005603663 lvm[115747]: VG ceph_vg1 finished
Jan 31 03:13:14 np0005603663 lvm[115746]: VG ceph_vg0 finished
Jan 31 03:13:14 np0005603663 lvm[115749]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:13:14 np0005603663 lvm[115749]: VG ceph_vg2 finished
Jan 31 03:13:14 np0005603663 festive_rhodes[115668]: {}
Jan 31 03:13:14 np0005603663 systemd[1]: libpod-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Deactivated successfully.
Jan 31 03:13:14 np0005603663 podman[115651]: 2026-01-31 08:13:14.318010162 +0000 UTC m=+1.014035257 container died 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:13:14 np0005603663 systemd[1]: libpod-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Consumed 1.043s CPU time.
Jan 31 03:13:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750-merged.mount: Deactivated successfully.
Jan 31 03:13:14 np0005603663 podman[115651]: 2026-01-31 08:13:14.368576182 +0000 UTC m=+1.064601287 container remove 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:14 np0005603663 systemd[1]: libpod-conmon-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Deactivated successfully.
Jan 31 03:13:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:13:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:13:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:13:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:15 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 03:13:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:15 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 03:13:17 np0005603663 systemd-logind[793]: New session 40 of user zuul.
Jan 31 03:13:17 np0005603663 systemd[1]: Started Session 40 of User zuul.
Jan 31 03:13:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:17 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 03:13:17 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 03:13:18 np0005603663 python3.9[115943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:13:19 np0005603663 python3.9[116099]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:19 np0005603663 python3.9[116274]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:20 np0005603663 python3.9[116352]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.adm08e91 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:20 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 03:13:20 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 03:13:21 np0005603663 python3.9[116504]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:21 np0005603663 python3.9[116582]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ffomknku recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:22 np0005603663 python3.9[116734]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:13:23 np0005603663 python3.9[116886]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:23 np0005603663 python3.9[116964]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:13:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:23 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 03:13:24 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 03:13:24 np0005603663 python3.9[117116]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 03:13:24 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 03:13:24 np0005603663 python3.9[117194]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:13:25 np0005603663 python3.9[117346]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:25 np0005603663 python3.9[117498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:25 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 03:13:26 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 03:13:26 np0005603663 python3.9[117576]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:26 np0005603663 python3.9[117728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:27 np0005603663 python3.9[117806]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:28 np0005603663 python3.9[117958]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:13:28 np0005603663 systemd[1]: Reloading.
Jan 31 03:13:28 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:13:28 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:13:29 np0005603663 python3.9[118147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:29 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 03:13:29 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 03:13:29 np0005603663 python3.9[118225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:30 np0005603663 python3.9[118377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:30 np0005603663 python3.9[118455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:31 np0005603663 python3.9[118607]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:13:31 np0005603663 systemd[1]: Reloading.
Jan 31 03:13:31 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:13:31 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:13:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 03:13:31 np0005603663 systemd[1]: Starting Create netns directory...
Jan 31 03:13:31 np0005603663 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 03:13:31 np0005603663 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 03:13:31 np0005603663 systemd[1]: Finished Create netns directory.
Jan 31 03:13:31 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:13:31
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'backups', '.rgw.root', 'default.rgw.log']
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:13:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:32 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 03:13:32 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 03:13:32 np0005603663 python3.9[118798]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:13:32 np0005603663 network[118815]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:13:32 np0005603663 network[118816]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:13:32 np0005603663 network[118817]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:13:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 03:13:32 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:13:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:13:33 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 03:13:33 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 03:13:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:35 np0005603663 python3.9[119079]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:35 np0005603663 python3.9[119157]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:35 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 03:13:35 np0005603663 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 03:13:36 np0005603663 python3.9[119309]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:36 np0005603663 python3.9[119461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 03:13:37 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 03:13:37 np0005603663 python3.9[119539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:38 np0005603663 python3.9[119691]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 03:13:38 np0005603663 systemd[1]: Starting Time & Date Service...
Jan 31 03:13:38 np0005603663 systemd[1]: Started Time & Date Service.
Jan 31 03:13:38 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 03:13:38 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 03:13:38 np0005603663 python3.9[119847]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:39 np0005603663 python3.9[119999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:40 np0005603663 python3.9[120077]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:40 np0005603663 python3.9[120229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:40 np0005603663 python3.9[120307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wqkwzi68 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:41 np0005603663 python3.9[120459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:41 np0005603663 python3.9[120537]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:42 np0005603663 python3.9[120689]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:13:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 03:13:43 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 03:13:43 np0005603663 python3[120842]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 03:13:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:43 np0005603663 python3.9[120994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:44 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 03:13:44 np0005603663 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 03:13:44 np0005603663 python3.9[121072]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:44 np0005603663 python3.9[121224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:45 np0005603663 python3.9[121349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847224.5311546-308-278075843057044/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:46 np0005603663 python3.9[121501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:46 np0005603663 python3.9[121579]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:47 np0005603663 python3.9[121731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:47 np0005603663 python3.9[121809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:48 np0005603663 python3.9[121961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:13:48 np0005603663 python3.9[122039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:49 np0005603663 python3.9[122191]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:13:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:50 np0005603663 python3.9[122346]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:50 np0005603663 python3.9[122498]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:51 np0005603663 python3.9[122650]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:13:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:52 np0005603663 python3.9[122802]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 03:13:52 np0005603663 python3.9[122954]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 03:13:53 np0005603663 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 03:13:53 np0005603663 systemd[1]: session-40.scope: Consumed 25.095s CPU time.
Jan 31 03:13:53 np0005603663 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Jan 31 03:13:53 np0005603663 systemd-logind[793]: Removed session 40.
Jan 31 03:13:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:13:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:13:58 np0005603663 systemd-logind[793]: New session 41 of user zuul.
Jan 31 03:13:58 np0005603663 systemd[1]: Started Session 41 of User zuul.
Jan 31 03:13:58 np0005603663 python3.9[123134]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 03:13:59 np0005603663 python3.9[123286]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:13:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:00 np0005603663 python3.9[123440]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 03:14:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:00 np0005603663 python3.9[123592]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2jw423ga follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:01 np0005603663 python3.9[123717]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2jw423ga mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847240.4163513-44-166055283139032/.source.2jw423ga _original_basename=._b2qh4io follow=False checksum=085088cdd6eb94656409168e9e8a2a7ec564f206 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:02 np0005603663 python3.9[123869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:03 np0005603663 python3.9[124021]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWE2JVgZg7/u8eKJOhyXjs2p2Qt39hyygdPIhluejh1YW6dcdEylP4WBj6s+q3E0jylhkLknf3rSZ3V/k+1w4fdSUak8G4nLiV+h7jI0m37zoSEXpQABHGJkpgi2eMs0YNEF9ZbgIO31d28SspBpNxFqovrMK9sOzJD3jRaR2TV2FGV4csI4Je0LNdEV2NmeRljWtF7PlqQKs424iGvqmWC0B3yHCfBTNvXWNKzGR1N9odg9DQrU9iQl+1eRKkj6BTvJgzpUrsqny5n8vohkDGBUxN/PXOEp7pqhuJUPSphsqmLwQwrLfwDu7A7dJJfZkVKkpzZyD6doTBm0NvOOS1P7M8/iclLU1KEYLp51WWXc+cX67skjn1vfDJa7CGV5YlXA3q5QP5xqR6eDbptMG7KpRBt6sSG7A44KIXdmzbWGFuBJYi0sjVIDfXPkfJOcwxwUzMotpbCYCDOV94CS6XESh8ZKogwpuB8qVCTqZEJz/qxAkpdL1xxLZ6iM3SA2k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdV4ImCUSap74vh7n2NTRmfyoKbp4X6QTOOZaAU/4X4#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNKN9rH1fl1KXYyt+swOzNYmow6bIvU77b90jfMS4wXtyUATZdas4vlUZ46SayVV+s+nKQQloJFhgnR/5ots9Yc=#012 create=True mode=0644 path=/tmp/ansible.2jw423ga state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:03 np0005603663 python3.9[124173]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2jw423ga' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:14:04 np0005603663 python3.9[124327]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2jw423ga state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:04 np0005603663 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Jan 31 03:14:04 np0005603663 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 03:14:04 np0005603663 systemd[1]: session-41.scope: Consumed 4.257s CPU time.
Jan 31 03:14:04 np0005603663 systemd-logind[793]: Removed session 41.
Jan 31 03:14:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:08 np0005603663 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 03:14:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:10 np0005603663 systemd-logind[793]: New session 42 of user zuul.
Jan 31 03:14:10 np0005603663 systemd[1]: Started Session 42 of User zuul.
Jan 31 03:14:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:11 np0005603663 python3.9[124508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:14:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:12 np0005603663 python3.9[124664]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 03:14:13 np0005603663 python3.9[124818]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:14:13 np0005603663 python3.9[124971]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:14:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:14 np0005603663 python3.9[125124]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:14:15 np0005603663 python3.9[125386]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.39841622 +0000 UTC m=+0.036251667 container create 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:15 np0005603663 systemd[1]: Started libpod-conmon-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope.
Jan 31 03:14:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.473770868 +0000 UTC m=+0.111606335 container init 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.479899392 +0000 UTC m=+0.117734869 container start 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.383139063 +0000 UTC m=+0.020974550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.483617621 +0000 UTC m=+0.121453058 container attach 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:14:15 np0005603663 sweet_payne[125460]: 167 167
Jan 31 03:14:15 np0005603663 systemd[1]: libpod-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.487936836 +0000 UTC m=+0.125772303 container died 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:14:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b17f9cbbb82498831d92474017ee0d6c6981d4c69d74ed98b09258bbf1c39a23-merged.mount: Deactivated successfully.
Jan 31 03:14:15 np0005603663 podman[125444]: 2026-01-31 08:14:15.523878284 +0000 UTC m=+0.161713721 container remove 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:14:15 np0005603663 systemd[1]: libpod-conmon-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603663 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Jan 31 03:14:15 np0005603663 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603663 systemd[1]: session-42.scope: Consumed 3.347s CPU time.
Jan 31 03:14:15 np0005603663 systemd-logind[793]: Removed session 42.
Jan 31 03:14:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:15 np0005603663 podman[125484]: 2026-01-31 08:14:15.647404036 +0000 UTC m=+0.041560919 container create 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:14:15 np0005603663 systemd[1]: Started libpod-conmon-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope.
Jan 31 03:14:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603663 podman[125484]: 2026-01-31 08:14:15.626348945 +0000 UTC m=+0.020505838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:15 np0005603663 podman[125484]: 2026-01-31 08:14:15.738561995 +0000 UTC m=+0.132718898 container init 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:15 np0005603663 podman[125484]: 2026-01-31 08:14:15.750231846 +0000 UTC m=+0.144388709 container start 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:15 np0005603663 podman[125484]: 2026-01-31 08:14:15.753807862 +0000 UTC m=+0.147964765 container attach 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:14:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:14:16 np0005603663 busy_mccarthy[125501]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:14:16 np0005603663 busy_mccarthy[125501]: --> All data devices are unavailable
Jan 31 03:14:16 np0005603663 systemd[1]: libpod-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125484]: 2026-01-31 08:14:16.157299555 +0000 UTC m=+0.551456458 container died 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:14:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d-merged.mount: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125484]: 2026-01-31 08:14:16.189958665 +0000 UTC m=+0.584115568 container remove 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:16 np0005603663 systemd[1]: libpod-conmon-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.635229042 +0000 UTC m=+0.043625504 container create 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:14:16 np0005603663 systemd[1]: Started libpod-conmon-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope.
Jan 31 03:14:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.701857887 +0000 UTC m=+0.110254369 container init 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.706701286 +0000 UTC m=+0.115097738 container start 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:16 np0005603663 naughty_herschel[125610]: 167 167
Jan 31 03:14:16 np0005603663 systemd[1]: libpod-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.709925962 +0000 UTC m=+0.118322444 container attach 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.615178537 +0000 UTC m=+0.023575049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:16 np0005603663 conmon[125610]: conmon 81b385370995ab239c14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope/container/memory.events
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.710861897 +0000 UTC m=+0.119258389 container died 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:14:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fe868524c4dba6a4d346e7d62674edd468cb9506f80bbcac96039bc6f5757263-merged.mount: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125594]: 2026-01-31 08:14:16.742194842 +0000 UTC m=+0.150591294 container remove 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 03:14:16 np0005603663 systemd[1]: libpod-conmon-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603663 podman[125634]: 2026-01-31 08:14:16.855692247 +0000 UTC m=+0.034911201 container create b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:14:16 np0005603663 systemd[1]: Started libpod-conmon-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope.
Jan 31 03:14:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:16 np0005603663 podman[125634]: 2026-01-31 08:14:16.919860537 +0000 UTC m=+0.099079491 container init b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:14:16 np0005603663 podman[125634]: 2026-01-31 08:14:16.924375278 +0000 UTC m=+0.103594192 container start b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:14:16 np0005603663 podman[125634]: 2026-01-31 08:14:16.927119511 +0000 UTC m=+0.106338465 container attach b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:14:16 np0005603663 podman[125634]: 2026-01-31 08:14:16.839546387 +0000 UTC m=+0.018765331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:17 np0005603663 boring_carson[125650]: {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    "0": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "devices": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "/dev/loop3"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            ],
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_name": "ceph_lv0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_size": "21470642176",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "name": "ceph_lv0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "tags": {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_name": "ceph",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.crush_device_class": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.encrypted": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.objectstore": "bluestore",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_id": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.vdo": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.with_tpm": "0"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            },
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "vg_name": "ceph_vg0"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        }
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    ],
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    "1": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "devices": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "/dev/loop4"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            ],
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_name": "ceph_lv1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_size": "21470642176",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "name": "ceph_lv1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "tags": {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_name": "ceph",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.crush_device_class": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.encrypted": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.objectstore": "bluestore",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_id": "1",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.vdo": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.with_tpm": "0"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            },
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "vg_name": "ceph_vg1"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        }
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    ],
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    "2": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "devices": [
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "/dev/loop5"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            ],
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_name": "ceph_lv2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_size": "21470642176",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "name": "ceph_lv2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "tags": {
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.cluster_name": "ceph",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.crush_device_class": "",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.encrypted": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.objectstore": "bluestore",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osd_id": "2",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.vdo": "0",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:                "ceph.with_tpm": "0"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            },
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "type": "block",
Jan 31 03:14:17 np0005603663 boring_carson[125650]:            "vg_name": "ceph_vg2"
Jan 31 03:14:17 np0005603663 boring_carson[125650]:        }
Jan 31 03:14:17 np0005603663 boring_carson[125650]:    ]
Jan 31 03:14:17 np0005603663 boring_carson[125650]: }
Jan 31 03:14:17 np0005603663 systemd[1]: libpod-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope: Deactivated successfully.
Jan 31 03:14:17 np0005603663 conmon[125650]: conmon b2532c8b340c531c8710 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope/container/memory.events
Jan 31 03:14:17 np0005603663 podman[125634]: 2026-01-31 08:14:17.193343456 +0000 UTC m=+0.372562380 container died b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60-merged.mount: Deactivated successfully.
Jan 31 03:14:17 np0005603663 podman[125634]: 2026-01-31 08:14:17.233690631 +0000 UTC m=+0.412909565 container remove b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:17 np0005603663 systemd[1]: libpod-conmon-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope: Deactivated successfully.
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.606528598 +0000 UTC m=+0.031764638 container create b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:14:17 np0005603663 systemd[1]: Started libpod-conmon-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope.
Jan 31 03:14:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.656741686 +0000 UTC m=+0.081977756 container init b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.661116653 +0000 UTC m=+0.086352693 container start b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:14:17 np0005603663 funny_lamarr[125749]: 167 167
Jan 31 03:14:17 np0005603663 systemd[1]: libpod-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope: Deactivated successfully.
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.66516274 +0000 UTC m=+0.090398800 container attach b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.665626173 +0000 UTC m=+0.090862233 container died b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:14:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a5724626db40ca89fdea9d1fbdec7cfe2ab39221004555338f0f50d80cf7d5ea-merged.mount: Deactivated successfully.
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.592005661 +0000 UTC m=+0.017241721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:17 np0005603663 podman[125733]: 2026-01-31 08:14:17.693360942 +0000 UTC m=+0.118596982 container remove b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:14:17 np0005603663 systemd[1]: libpod-conmon-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope: Deactivated successfully.
Jan 31 03:14:17 np0005603663 podman[125771]: 2026-01-31 08:14:17.82988437 +0000 UTC m=+0.053242570 container create 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:14:17 np0005603663 systemd[1]: Started libpod-conmon-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope.
Jan 31 03:14:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:14:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:17 np0005603663 podman[125771]: 2026-01-31 08:14:17.896206088 +0000 UTC m=+0.119564348 container init 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:17 np0005603663 podman[125771]: 2026-01-31 08:14:17.805717376 +0000 UTC m=+0.029075676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:14:17 np0005603663 podman[125771]: 2026-01-31 08:14:17.901463368 +0000 UTC m=+0.124821568 container start 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:14:17 np0005603663 podman[125771]: 2026-01-31 08:14:17.904316274 +0000 UTC m=+0.127674474 container attach 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:18 np0005603663 lvm[125865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:14:18 np0005603663 lvm[125865]: VG ceph_vg0 finished
Jan 31 03:14:18 np0005603663 lvm[125866]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:14:18 np0005603663 lvm[125866]: VG ceph_vg1 finished
Jan 31 03:14:18 np0005603663 lvm[125868]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:14:18 np0005603663 lvm[125868]: VG ceph_vg2 finished
Jan 31 03:14:18 np0005603663 distracted_fermi[125787]: {}
Jan 31 03:14:18 np0005603663 systemd[1]: libpod-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Deactivated successfully.
Jan 31 03:14:18 np0005603663 podman[125771]: 2026-01-31 08:14:18.623541812 +0000 UTC m=+0.846900032 container died 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:18 np0005603663 systemd[1]: libpod-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Consumed 1.017s CPU time.
Jan 31 03:14:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc-merged.mount: Deactivated successfully.
Jan 31 03:14:18 np0005603663 podman[125771]: 2026-01-31 08:14:18.666313092 +0000 UTC m=+0.889671282 container remove 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:18 np0005603663 systemd[1]: libpod-conmon-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Deactivated successfully.
Jan 31 03:14:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:14:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:14:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:14:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:21 np0005603663 systemd-logind[793]: New session 43 of user zuul.
Jan 31 03:14:21 np0005603663 systemd[1]: Started Session 43 of User zuul.
Jan 31 03:14:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:22 np0005603663 python3.9[126060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:14:23 np0005603663 python3.9[126216]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:14:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:23 np0005603663 python3.9[126300]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 03:14:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:25 np0005603663 python3.9[126451]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:14:26 np0005603663 python3.9[126602]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:14:27 np0005603663 python3.9[126752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:14:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:28 np0005603663 python3.9[126902]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:14:28 np0005603663 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 03:14:28 np0005603663 systemd[1]: session-43.scope: Consumed 5.050s CPU time.
Jan 31 03:14:28 np0005603663 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Jan 31 03:14:28 np0005603663 systemd-logind[793]: Removed session 43.
Jan 31 03:14:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:30 np0005603663 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 03:14:30 np0005603663 systemd[1]: session-18.scope: Consumed 1min 28.530s CPU time.
Jan 31 03:14:30 np0005603663 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Jan 31 03:14:30 np0005603663 systemd-logind[793]: Removed session 18.
Jan 31 03:14:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:14:31
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:14:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:14:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:14:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:34 np0005603663 systemd-logind[793]: New session 44 of user zuul.
Jan 31 03:14:34 np0005603663 systemd[1]: Started Session 44 of User zuul.
Jan 31 03:14:35 np0005603663 python3.9[127080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:14:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:36 np0005603663 python3.9[127236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:37 np0005603663 python3.9[127388]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:38 np0005603663 python3.9[127540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:38 np0005603663 python3.9[127663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847277.7516167-60-173834942388739/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fdbed11d72702d0c28585d2f3fa0ede8c1d99a43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:39 np0005603663 python3.9[127815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:39 np0005603663 python3.9[127938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847279.1072574-60-111341352915368/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7181925ca4d0c23701428eb3b5989ad45810d4dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:40 np0005603663 python3.9[128090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:41 np0005603663 python3.9[128213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847280.1287086-60-180092125624263/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fce23680cf7dc4c27547710ac9265ef124ceb373 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:41 np0005603663 python3.9[128365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:42 np0005603663 python3.9[128517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:42 np0005603663 python3.9[128669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:14:43 np0005603663 python3.9[128792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847282.385077-119-225058434797769/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6120145c13ba3d014fbf8fdeb4b2ab094d53d173 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:43 np0005603663 python3.9[128944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:44 np0005603663 python3.9[129067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847283.4642477-119-123690763768607/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d574945944269f1960401828db2762d2c018b87 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:44 np0005603663 python3.9[129219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:45 np0005603663 python3.9[129342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847284.53473-119-206437190671897/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e86b6cf4aae47e5bdcaaf716eff3983006637253 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:45 np0005603663 python3.9[129494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:46 np0005603663 python3.9[129646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:47 np0005603663 python3.9[129798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:47 np0005603663 python3.9[129921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847286.6491563-178-72713593394978/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=89a2dedd8167f548d7ca9fc4e2315de9d798066a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:48 np0005603663 python3.9[130073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:48 np0005603663 python3.9[130196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847287.7646909-178-163371699699038/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d574945944269f1960401828db2762d2c018b87 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:49 np0005603663 python3.9[130348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:49 np0005603663 python3.9[130471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847288.919947-178-97212129338974/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c9c1b4eb3cf996ed403ca96208c923e78c00afee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:50 np0005603663 python3.9[130623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:51 np0005603663 python3.9[130775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:52 np0005603663 python3.9[130898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847291.115542-246-161429158329434/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:52 np0005603663 python3.9[131050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:53 np0005603663 python3.9[131202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:53 np0005603663 python3.9[131325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847292.9681025-270-67096905926248/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:54 np0005603663 python3.9[131477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:55 np0005603663 python3.9[131629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:55 np0005603663 python3.9[131752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847294.6906905-294-271771990330542/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:14:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:56 np0005603663 python3.9[131904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:56 np0005603663 python3.9[132056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:57 np0005603663 python3.9[132179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847296.355829-318-165895419332194/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:14:57 np0005603663 python3.9[132331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:58 np0005603663 python3.9[132483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:14:59 np0005603663 python3.9[132606]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847298.127833-342-261183602427854/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:14:59 np0005603663 python3.9[132758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:14:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:00 np0005603663 python3.9[132910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:00 np0005603663 python3.9[133033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847299.8365793-366-105837305318786/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:01 np0005603663 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 03:15:01 np0005603663 systemd[1]: session-44.scope: Consumed 20.210s CPU time.
Jan 31 03:15:01 np0005603663 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Jan 31 03:15:01 np0005603663 systemd-logind[793]: Removed session 44.
Jan 31 03:15:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:06 np0005603663 systemd-logind[793]: New session 45 of user zuul.
Jan 31 03:15:06 np0005603663 systemd[1]: Started Session 45 of User zuul.
Jan 31 03:15:07 np0005603663 python3.9[133213]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:08 np0005603663 python3.9[133365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:08 np0005603663 python3.9[133488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847307.6373734-29-86429583385594/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5ead94c69bd1df72757f346af781128058784f3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:09 np0005603663 python3.9[133640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:09 np0005603663 python3.9[133763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847308.940014-29-58775629134600/.source.conf _original_basename=ceph.conf follow=False checksum=a00f0ea0dc22846dc13e7a7ab591bc83410e8962 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:10 np0005603663 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 03:15:10 np0005603663 systemd[1]: session-45.scope: Consumed 2.250s CPU time.
Jan 31 03:15:10 np0005603663 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Jan 31 03:15:10 np0005603663 systemd-logind[793]: Removed session 45.
Jan 31 03:15:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:15 np0005603663 systemd-logind[793]: New session 46 of user zuul.
Jan 31 03:15:15 np0005603663 systemd[1]: Started Session 46 of User zuul.
Jan 31 03:15:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:16 np0005603663 python3.9[133941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:15:17 np0005603663 python3.9[134097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:18 np0005603663 python3.9[134249]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:18 np0005603663 python3.9[134399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:15:19 np0005603663 python3.9[134675]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.611418236 +0000 UTC m=+0.037547675 container create fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:15:19 np0005603663 systemd[1]: Started libpod-conmon-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope.
Jan 31 03:15:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.665454553 +0000 UTC m=+0.091584042 container init fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.672759864 +0000 UTC m=+0.098889323 container start fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:15:19 np0005603663 systemd[1]: libpod-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope: Deactivated successfully.
Jan 31 03:15:19 np0005603663 pedantic_margulis[134713]: 167 167
Jan 31 03:15:19 np0005603663 conmon[134713]: conmon fdee190c39bf0ebeeb73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope/container/memory.events
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.680290162 +0000 UTC m=+0.106419621 container attach fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.680753304 +0000 UTC m=+0.106882763 container died fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.591604861 +0000 UTC m=+0.017734320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:19 np0005603663 systemd[1]: var-lib-containers-storage-overlay-74d79edfabef9a25c70ef9e8fab480b3405247726a5a6ca0f401dd4d31651383-merged.mount: Deactivated successfully.
Jan 31 03:15:19 np0005603663 podman[134696]: 2026-01-31 08:15:19.715950603 +0000 UTC m=+0.142080032 container remove fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:15:19 np0005603663 systemd[1]: libpod-conmon-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope: Deactivated successfully.
Jan 31 03:15:19 np0005603663 podman[134739]: 2026-01-31 08:15:19.833686884 +0000 UTC m=+0.042640775 container create 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:19 np0005603663 systemd[1]: Started libpod-conmon-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope.
Jan 31 03:15:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:19 np0005603663 podman[134739]: 2026-01-31 08:15:19.81211647 +0000 UTC m=+0.021070381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:19 np0005603663 podman[134739]: 2026-01-31 08:15:19.977512123 +0000 UTC m=+0.186466024 container init 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:15:19 np0005603663 podman[134739]: 2026-01-31 08:15:19.982949672 +0000 UTC m=+0.191903553 container start 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:15:20 np0005603663 podman[134739]: 2026-01-31 08:15:20.075675345 +0000 UTC m=+0.284629246 container attach 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:20 np0005603663 wonderful_maxwell[134755]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:15:20 np0005603663 wonderful_maxwell[134755]: --> All data devices are unavailable
Jan 31 03:15:20 np0005603663 systemd[1]: libpod-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope: Deactivated successfully.
Jan 31 03:15:20 np0005603663 podman[134739]: 2026-01-31 08:15:20.366781617 +0000 UTC m=+0.575735508 container died 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:20 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 03:15:20 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d-merged.mount: Deactivated successfully.
Jan 31 03:15:20 np0005603663 podman[134739]: 2026-01-31 08:15:20.601422475 +0000 UTC m=+0.810376356 container remove 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:15:20 np0005603663 systemd[1]: libpod-conmon-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope: Deactivated successfully.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.664661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320664688, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1382, "num_deletes": 250, "total_data_size": 2070695, "memory_usage": 2105992, "flush_reason": "Manual Compaction"}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320672055, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1215011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7460, "largest_seqno": 8841, "table_properties": {"data_size": 1210178, "index_size": 2101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13059, "raw_average_key_size": 20, "raw_value_size": 1199277, "raw_average_value_size": 1862, "num_data_blocks": 99, "num_entries": 644, "num_filter_entries": 644, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847185, "oldest_key_time": 1769847185, "file_creation_time": 1769847320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7482 microseconds, and 2914 cpu microseconds.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.672129) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1215011 bytes OK
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.672162) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674408) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674429) EVENT_LOG_v1 {"time_micros": 1769847320674423, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2064443, prev total WAL file size 2064443, number of live WAL files 2.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.675151) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1186KB)], [20(7838KB)]
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320675224, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9242130, "oldest_snapshot_seqno": -1}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3350 keys, 7109940 bytes, temperature: kUnknown
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320734516, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7109940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7083734, "index_size": 16752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80694, "raw_average_key_size": 24, "raw_value_size": 7019288, "raw_average_value_size": 2095, "num_data_blocks": 741, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.734722) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7109940 bytes
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.737209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 119.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(13.5) write-amplify(5.9) OK, records in: 3800, records dropped: 450 output_compression: NoCompression
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.737228) EVENT_LOG_v1 {"time_micros": 1769847320737218, "job": 6, "event": "compaction_finished", "compaction_time_micros": 59353, "compaction_time_cpu_micros": 24566, "output_level": 6, "num_output_files": 1, "total_output_size": 7109940, "num_input_records": 3800, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320737412, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320738224, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.675044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.029713904 +0000 UTC m=+0.061701539 container create af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:20.986854364 +0000 UTC m=+0.018842009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:21 np0005603663 systemd[1]: Started libpod-conmon-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope.
Jan 31 03:15:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.141641685 +0000 UTC m=+0.173629350 container init af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.147896147 +0000 UTC m=+0.179883782 container start af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:21 np0005603663 exciting_bhabha[135025]: 167 167
Jan 31 03:15:21 np0005603663 systemd[1]: libpod-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope: Deactivated successfully.
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.232191357 +0000 UTC m=+0.264179002 container attach af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.232579048 +0000 UTC m=+0.264566673 container died af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:15:21 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b1ce77ed7bc5b6a9d47fa4797a0b4f3ae77f6c57467c1f97231689b7a98804b9-merged.mount: Deactivated successfully.
Jan 31 03:15:21 np0005603663 python3.9[135029]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:15:21 np0005603663 podman[134935]: 2026-01-31 08:15:21.496072251 +0000 UTC m=+0.528059876 container remove af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:21 np0005603663 systemd[1]: libpod-conmon-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope: Deactivated successfully.
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.601352458 +0000 UTC m=+0.035708363 container create 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:15:21 np0005603663 systemd[1]: Started libpod-conmon-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope.
Jan 31 03:15:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.58506089 +0000 UTC m=+0.019416845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.695196542 +0000 UTC m=+0.129552477 container init 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.700201409 +0000 UTC m=+0.134557314 container start 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.70420442 +0000 UTC m=+0.138560325 container attach 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:15:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]: {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    "0": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "devices": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "/dev/loop3"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            ],
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_name": "ceph_lv0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_size": "21470642176",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "name": "ceph_lv0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "tags": {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_name": "ceph",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.crush_device_class": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.encrypted": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.objectstore": "bluestore",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_id": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.vdo": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.with_tpm": "0"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            },
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "vg_name": "ceph_vg0"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        }
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    ],
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    "1": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "devices": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "/dev/loop4"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            ],
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_name": "ceph_lv1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_size": "21470642176",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "name": "ceph_lv1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "tags": {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_name": "ceph",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.crush_device_class": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.encrypted": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.objectstore": "bluestore",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_id": "1",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.vdo": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.with_tpm": "0"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            },
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "vg_name": "ceph_vg1"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        }
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    ],
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    "2": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "devices": [
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "/dev/loop5"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            ],
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_name": "ceph_lv2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_size": "21470642176",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "name": "ceph_lv2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "tags": {
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.cluster_name": "ceph",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.crush_device_class": "",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.encrypted": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.objectstore": "bluestore",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osd_id": "2",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.vdo": "0",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:                "ceph.with_tpm": "0"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            },
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "type": "block",
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:            "vg_name": "ceph_vg2"
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:        }
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]:    ]
Jan 31 03:15:21 np0005603663 dazzling_cohen[135076]: }
Jan 31 03:15:21 np0005603663 systemd[1]: libpod-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope: Deactivated successfully.
Jan 31 03:15:21 np0005603663 podman[135059]: 2026-01-31 08:15:21.981738349 +0000 UTC m=+0.416094244 container died 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:15:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f-merged.mount: Deactivated successfully.
Jan 31 03:15:22 np0005603663 podman[135059]: 2026-01-31 08:15:22.017827202 +0000 UTC m=+0.452183107 container remove 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:15:22 np0005603663 systemd[1]: libpod-conmon-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope: Deactivated successfully.
Jan 31 03:15:22 np0005603663 python3.9[135173]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.403058106 +0000 UTC m=+0.048615849 container create fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:22 np0005603663 systemd[1]: Started libpod-conmon-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope.
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.379321162 +0000 UTC m=+0.024878975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.490670387 +0000 UTC m=+0.136228210 container init fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.497842985 +0000 UTC m=+0.143400718 container start fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.501114815 +0000 UTC m=+0.146672658 container attach fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:22 np0005603663 heuristic_moser[135256]: 167 167
Jan 31 03:15:22 np0005603663 systemd[1]: libpod-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope: Deactivated successfully.
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.503064518 +0000 UTC m=+0.148622291 container died fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:15:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b925cfa9c0be030232c1f8e4e156d3d600e2210a0ebe3ab602d03bf46680be27-merged.mount: Deactivated successfully.
Jan 31 03:15:22 np0005603663 podman[135238]: 2026-01-31 08:15:22.538411091 +0000 UTC m=+0.183968824 container remove fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:15:22 np0005603663 systemd[1]: libpod-conmon-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope: Deactivated successfully.
Jan 31 03:15:22 np0005603663 podman[135279]: 2026-01-31 08:15:22.67389067 +0000 UTC m=+0.045555714 container create 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:15:22 np0005603663 systemd[1]: Started libpod-conmon-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope.
Jan 31 03:15:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:15:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603663 podman[135279]: 2026-01-31 08:15:22.735984849 +0000 UTC m=+0.107649893 container init 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:22 np0005603663 podman[135279]: 2026-01-31 08:15:22.741854601 +0000 UTC m=+0.113519655 container start 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:15:22 np0005603663 podman[135279]: 2026-01-31 08:15:22.745592694 +0000 UTC m=+0.117257768 container attach 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:15:22 np0005603663 podman[135279]: 2026-01-31 08:15:22.652174823 +0000 UTC m=+0.023839907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:15:23 np0005603663 lvm[135374]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:15:23 np0005603663 lvm[135374]: VG ceph_vg0 finished
Jan 31 03:15:23 np0005603663 lvm[135375]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:15:23 np0005603663 lvm[135375]: VG ceph_vg1 finished
Jan 31 03:15:23 np0005603663 lvm[135377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:15:23 np0005603663 lvm[135377]: VG ceph_vg2 finished
Jan 31 03:15:23 np0005603663 festive_almeida[135295]: {}
Jan 31 03:15:23 np0005603663 systemd[1]: libpod-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Deactivated successfully.
Jan 31 03:15:23 np0005603663 systemd[1]: libpod-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Consumed 1.024s CPU time.
Jan 31 03:15:23 np0005603663 podman[135279]: 2026-01-31 08:15:23.471053443 +0000 UTC m=+0.842718487 container died 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:15:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1-merged.mount: Deactivated successfully.
Jan 31 03:15:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:24 np0005603663 podman[135279]: 2026-01-31 08:15:24.031507069 +0000 UTC m=+1.403172113 container remove 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:15:24 np0005603663 systemd[1]: libpod-conmon-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Deactivated successfully.
Jan 31 03:15:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:15:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:15:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:24 np0005603663 python3.9[135569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:15:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:15:25 np0005603663 python3[135724]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 03:15:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:26 np0005603663 python3.9[135876]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:26 np0005603663 python3.9[136028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:27 np0005603663 python3.9[136106]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:27 np0005603663 python3.9[136258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:28 np0005603663 python3.9[136336]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.611d6u4w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:29 np0005603663 python3.9[136488]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:29 np0005603663 python3.9[136566]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:30 np0005603663 python3.9[136718]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:30 np0005603663 python3[136871]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 03:15:31 np0005603663 python3.9[137023]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:15:31
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'vms']
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:15:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:32 np0005603663 python3.9[137148]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847330.9402416-152-35645192379239/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:32 np0005603663 python3.9[137300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:15:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:15:33 np0005603663 python3.9[137425]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847332.2327523-167-215500486561212/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:33 np0005603663 python3.9[137577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:34 np0005603663 python3.9[137702]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847333.4125764-182-185652564697319/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:34 np0005603663 python3.9[137854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:35 np0005603663 python3.9[137979]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847334.477682-197-248212656413907/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:36 np0005603663 python3.9[138131]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:36 np0005603663 python3.9[138256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847335.5875292-212-103803243444494/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:37 np0005603663 python3.9[138408]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:37 np0005603663 python3.9[138560]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:38 np0005603663 python3.9[138715]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:39 np0005603663 python3.9[138867]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:39 np0005603663 python3.9[139020]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:15:40 np0005603663 python3.9[139174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:41 np0005603663 python3.9[139329]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:42 np0005603663 python3.9[139479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:15:43 np0005603663 python3.9[139632]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:43 np0005603663 ovs-vsctl[139633]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 03:15:43 np0005603663 python3.9[139785]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:44 np0005603663 python3.9[139940]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:15:44 np0005603663 ovs-vsctl[139941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 03:15:45 np0005603663 python3.9[140091]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:15:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:45 np0005603663 python3.9[140245]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:46 np0005603663 python3.9[140397]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:46 np0005603663 python3.9[140475]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:47 np0005603663 python3.9[140627]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:47 np0005603663 python3.9[140705]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:48 np0005603663 python3.9[140857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:48 np0005603663 python3.9[141009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:49 np0005603663 python3.9[141087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:49 np0005603663 python3.9[141239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:50 np0005603663 python3.9[141317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:51 np0005603663 python3.9[141469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:15:51 np0005603663 systemd[1]: Reloading.
Jan 31 03:15:51 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:15:51 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:15:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:52 np0005603663 python3.9[141659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:52 np0005603663 python3.9[141737]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:53 np0005603663 python3.9[141889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:53 np0005603663 python3.9[141967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:54 np0005603663 python3.9[142119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:15:54 np0005603663 systemd[1]: Reloading.
Jan 31 03:15:54 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:15:54 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:15:54 np0005603663 systemd[1]: Starting Create netns directory...
Jan 31 03:15:54 np0005603663 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 03:15:54 np0005603663 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 03:15:54 np0005603663 systemd[1]: Finished Create netns directory.
Jan 31 03:15:55 np0005603663 python3.9[142313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:15:55 np0005603663 python3.9[142465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:56 np0005603663 python3.9[142588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847355.4107118-463-206300139729687/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:57 np0005603663 python3.9[142740]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:57 np0005603663 python3.9[142892]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:15:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:15:58 np0005603663 python3.9[143044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:15:58 np0005603663 python3.9[143167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847358.0324063-496-234962115645165/.source.json _original_basename=.vsglnr4f follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:59 np0005603663 python3.9[143317]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:15:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:01 np0005603663 python3.9[143740]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 03:16:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:02 np0005603663 python3.9[143892]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:03 np0005603663 python3[144044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 03:16:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:07 np0005603663 podman[144056]: 2026-01-31 08:16:07.41359995 +0000 UTC m=+4.152039014 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 03:16:07 np0005603663 podman[144175]: 2026-01-31 08:16:07.535788634 +0000 UTC m=+0.050450080 container create 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:16:07 np0005603663 podman[144175]: 2026-01-31 08:16:07.505944352 +0000 UTC m=+0.020605868 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 03:16:07 np0005603663 python3[144044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 03:16:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:08 np0005603663 python3.9[144364]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:16:08 np0005603663 python3.9[144518]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:09 np0005603663 python3.9[144594]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:16:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:09 np0005603663 python3.9[144745]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847369.3068302-574-107954512761227/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:10 np0005603663 python3.9[144821]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:16:10 np0005603663 systemd[1]: Reloading.
Jan 31 03:16:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:16:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:16:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:11 np0005603663 python3.9[144932]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:16:11 np0005603663 systemd[1]: Reloading.
Jan 31 03:16:11 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:16:11 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:16:11 np0005603663 systemd[1]: Starting ovn_controller container...
Jan 31 03:16:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cd0f4556558e78008ae041da38720c9f50251c774d3f6f444a4642b75d8fdf/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:11 np0005603663 systemd[1]: Started /usr/bin/podman healthcheck run 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae.
Jan 31 03:16:11 np0005603663 podman[144974]: 2026-01-31 08:16:11.863183085 +0000 UTC m=+0.278515907 container init 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:16:11 np0005603663 ovn_controller[144989]: + sudo -E kolla_set_configs
Jan 31 03:16:11 np0005603663 podman[144974]: 2026-01-31 08:16:11.910530638 +0000 UTC m=+0.325863470 container start 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:16:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:11 np0005603663 edpm-start-podman-container[144974]: ovn_controller
Jan 31 03:16:11 np0005603663 systemd[1]: Created slice User Slice of UID 0.
Jan 31 03:16:11 np0005603663 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 03:16:11 np0005603663 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 03:16:11 np0005603663 systemd[1]: Starting User Manager for UID 0...
Jan 31 03:16:11 np0005603663 edpm-start-podman-container[144973]: Creating additional drop-in dependency for "ovn_controller" (14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae)
Jan 31 03:16:11 np0005603663 podman[144996]: 2026-01-31 08:16:11.981605825 +0000 UTC m=+0.065618287 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:16:11 np0005603663 systemd[1]: 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae-250c62c7a9abe11b.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 03:16:11 np0005603663 systemd[1]: 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae-250c62c7a9abe11b.service: Failed with result 'exit-code'.
Jan 31 03:16:11 np0005603663 systemd[1]: Reloading.
Jan 31 03:16:12 np0005603663 systemd[145017]: Queued start job for default target Main User Target.
Jan 31 03:16:12 np0005603663 systemd[145017]: Created slice User Application Slice.
Jan 31 03:16:12 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:16:12 np0005603663 systemd[145017]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 03:16:12 np0005603663 systemd[145017]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:16:12 np0005603663 systemd[145017]: Reached target Paths.
Jan 31 03:16:12 np0005603663 systemd[145017]: Reached target Timers.
Jan 31 03:16:12 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:16:12 np0005603663 systemd[145017]: Starting D-Bus User Message Bus Socket...
Jan 31 03:16:12 np0005603663 systemd[145017]: Starting Create User's Volatile Files and Directories...
Jan 31 03:16:12 np0005603663 systemd[145017]: Finished Create User's Volatile Files and Directories.
Jan 31 03:16:12 np0005603663 systemd[145017]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:16:12 np0005603663 systemd[145017]: Reached target Sockets.
Jan 31 03:16:12 np0005603663 systemd[145017]: Reached target Basic System.
Jan 31 03:16:12 np0005603663 systemd[145017]: Reached target Main User Target.
Jan 31 03:16:12 np0005603663 systemd[145017]: Startup finished in 119ms.
Jan 31 03:16:12 np0005603663 systemd[1]: Started User Manager for UID 0.
Jan 31 03:16:12 np0005603663 systemd[1]: Started ovn_controller container.
Jan 31 03:16:12 np0005603663 systemd[1]: Started Session c1 of User root.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: INFO:__main__:Validating config file
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: INFO:__main__:Writing out command to execute
Jan 31 03:16:12 np0005603663 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: ++ cat /run_command
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + ARGS=
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + sudo kolla_copy_cacerts
Jan 31 03:16:12 np0005603663 systemd[1]: Started Session c2 of User root.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + [[ ! -n '' ]]
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + . kolla_extend_start
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + umask 0022
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 03:16:12 np0005603663 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4113] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4119] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <warn>  [1769847372.4121] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4126] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4130] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4132] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 03:16:12 np0005603663 kernel: br-int: entered promiscuous mode
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 31 03:16:12 np0005603663 systemd-udevd[145121]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 03:16:12 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4382] manager: (ovn-ade796-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 03:16:12 np0005603663 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4607] device (genev_sys_6081): carrier: link connected
Jan 31 03:16:12 np0005603663 NetworkManager[49054]: <info>  [1769847372.4609] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 03:16:12 np0005603663 systemd-udevd[145124]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:16:13 np0005603663 python3.9[145251]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 03:16:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:16:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2034 writes, 9080 keys, 2034 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2034 writes, 2034 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2034 writes, 9080 keys, 2034 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s#012Interval WAL: 2034 writes, 2034 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    178.9      0.05              0.01         3    0.016       0      0       0.0       0.0#012  L6      1/0    6.78 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    159.4    139.3      0.10              0.04         2    0.052    7245    739       0.0       0.0#012 Sum      1/0    6.78 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    107.9    152.1      0.15              0.05         5    0.031    7245    739       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    110.9    156.1      0.15              0.05         4    0.037    7245    739       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    159.4    139.3      0.10              0.04         2    0.052    7245    739       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    194.4      0.05              0.01         2    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 308.00 MB usage: 687.05 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000107 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(38,595.77 KB,0.188897%) FilterBlock(6,28.36 KB,0.00899179%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:16:13 np0005603663 python3.9[145403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:14 np0005603663 python3.9[145526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847373.4281287-619-80319872166865/.source.yaml _original_basename=.5m6ejl3c follow=False checksum=1a2e4ae73b9ac25b107575967ad92468de0fdd78 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:14 np0005603663 python3.9[145678]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:16:14 np0005603663 ovs-vsctl[145679]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 03:16:15 np0005603663 python3.9[145831]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:16:15 np0005603663 ovs-vsctl[145833]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 03:16:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:16 np0005603663 python3.9[145986]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:16:16 np0005603663 ovs-vsctl[145987]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 03:16:16 np0005603663 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 03:16:16 np0005603663 systemd[1]: session-46.scope: Consumed 50.732s CPU time.
Jan 31 03:16:16 np0005603663 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Jan 31 03:16:16 np0005603663 systemd-logind[793]: Removed session 46.
Jan 31 03:16:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:22 np0005603663 systemd-logind[793]: New session 48 of user zuul.
Jan 31 03:16:22 np0005603663 systemd[1]: Started Session 48 of User zuul.
Jan 31 03:16:22 np0005603663 systemd[1]: Stopping User Manager for UID 0...
Jan 31 03:16:22 np0005603663 systemd[145017]: Activating special unit Exit the Session...
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped target Main User Target.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped target Basic System.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped target Paths.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped target Sockets.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped target Timers.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:16:22 np0005603663 systemd[145017]: Closed D-Bus User Message Bus Socket.
Jan 31 03:16:22 np0005603663 systemd[145017]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:16:22 np0005603663 systemd[145017]: Removed slice User Application Slice.
Jan 31 03:16:22 np0005603663 systemd[145017]: Reached target Shutdown.
Jan 31 03:16:22 np0005603663 systemd[145017]: Finished Exit the Session.
Jan 31 03:16:22 np0005603663 systemd[145017]: Reached target Exit the Session.
Jan 31 03:16:22 np0005603663 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 03:16:22 np0005603663 systemd[1]: Stopped User Manager for UID 0.
Jan 31 03:16:22 np0005603663 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 03:16:22 np0005603663 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 03:16:22 np0005603663 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 03:16:22 np0005603663 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 03:16:22 np0005603663 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 03:16:23 np0005603663 python3.9[146167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:16:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:24 np0005603663 python3.9[146336]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:16:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:16:25 np0005603663 python3.9[146557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:16:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.320656336 +0000 UTC m=+0.043416309 container create b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:16:25 np0005603663 systemd[1]: Started libpod-conmon-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope.
Jan 31 03:16:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.393448043 +0000 UTC m=+0.116208016 container init b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.299457146 +0000 UTC m=+0.022217169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.401017684 +0000 UTC m=+0.123777657 container start b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.404353667 +0000 UTC m=+0.127113810 container attach b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:16:25 np0005603663 cranky_booth[146750]: 167 167
Jan 31 03:16:25 np0005603663 systemd[1]: libpod-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope: Deactivated successfully.
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.406004303 +0000 UTC m=+0.128764276 container died b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fb05b9914f06e9cea799a904a5ce959cea1fa82c25c163add3eae907debb0f3c-merged.mount: Deactivated successfully.
Jan 31 03:16:25 np0005603663 podman[146697]: 2026-01-31 08:16:25.438642221 +0000 UTC m=+0.161402194 container remove b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:16:25 np0005603663 systemd[1]: libpod-conmon-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope: Deactivated successfully.
Jan 31 03:16:25 np0005603663 podman[146814]: 2026-01-31 08:16:25.553143059 +0000 UTC m=+0.039072738 container create a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:16:25 np0005603663 systemd[1]: Started libpod-conmon-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope.
Jan 31 03:16:25 np0005603663 python3.9[146806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603663 podman[146814]: 2026-01-31 08:16:25.534942053 +0000 UTC m=+0.020871722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:25 np0005603663 podman[146814]: 2026-01-31 08:16:25.643409473 +0000 UTC m=+0.129339142 container init a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:16:25 np0005603663 podman[146814]: 2026-01-31 08:16:25.652475125 +0000 UTC m=+0.138404764 container start a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:25 np0005603663 podman[146814]: 2026-01-31 08:16:25.655768897 +0000 UTC m=+0.141698556 container attach a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:16:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:26 np0005603663 quirky_roentgen[146831]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:16:26 np0005603663 quirky_roentgen[146831]: --> All data devices are unavailable
Jan 31 03:16:26 np0005603663 systemd[1]: libpod-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603663 podman[146814]: 2026-01-31 08:16:26.059148608 +0000 UTC m=+0.545078287 container died a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:16:26 np0005603663 systemd[1]: var-lib-containers-storage-overlay-effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9-merged.mount: Deactivated successfully.
Jan 31 03:16:26 np0005603663 podman[146814]: 2026-01-31 08:16:26.109827209 +0000 UTC m=+0.595756858 container remove a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:26 np0005603663 systemd[1]: libpod-conmon-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603663 python3.9[146997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.482466624 +0000 UTC m=+0.039538772 container create a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:16:26 np0005603663 systemd[1]: Started libpod-conmon-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope.
Jan 31 03:16:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.460628636 +0000 UTC m=+0.017700794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.568027236 +0000 UTC m=+0.125099404 container init a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.573663623 +0000 UTC m=+0.130735771 container start a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:26 np0005603663 ecstatic_cerf[147246]: 167 167
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.577399957 +0000 UTC m=+0.134472115 container attach a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:16:26 np0005603663 systemd[1]: libpod-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.577877501 +0000 UTC m=+0.134949679 container died a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:16:26 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6b8b2474f27cb6631f26a732cb7771af7f9b94704e21c815bb9bd606b45adc97-merged.mount: Deactivated successfully.
Jan 31 03:16:26 np0005603663 podman[147184]: 2026-01-31 08:16:26.618569154 +0000 UTC m=+0.175641312 container remove a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:16:26 np0005603663 systemd[1]: libpod-conmon-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603663 python3.9[147248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:26 np0005603663 podman[147272]: 2026-01-31 08:16:26.782704594 +0000 UTC m=+0.060584998 container create c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:16:26 np0005603663 systemd[1]: Started libpod-conmon-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope.
Jan 31 03:16:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:26 np0005603663 podman[147272]: 2026-01-31 08:16:26.759798536 +0000 UTC m=+0.037678980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:26 np0005603663 podman[147272]: 2026-01-31 08:16:26.872982737 +0000 UTC m=+0.150863151 container init c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:16:26 np0005603663 podman[147272]: 2026-01-31 08:16:26.878633685 +0000 UTC m=+0.156514059 container start c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:16:26 np0005603663 podman[147272]: 2026-01-31 08:16:26.882868802 +0000 UTC m=+0.160749196 container attach c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]: {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    "0": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "devices": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "/dev/loop3"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            ],
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_name": "ceph_lv0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_size": "21470642176",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "name": "ceph_lv0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "tags": {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_name": "ceph",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.crush_device_class": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.encrypted": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.objectstore": "bluestore",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_id": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.vdo": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.with_tpm": "0"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            },
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "vg_name": "ceph_vg0"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        }
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    ],
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    "1": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "devices": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "/dev/loop4"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            ],
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_name": "ceph_lv1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_size": "21470642176",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "name": "ceph_lv1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "tags": {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_name": "ceph",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.crush_device_class": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.encrypted": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.objectstore": "bluestore",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_id": "1",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.vdo": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.with_tpm": "0"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            },
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "vg_name": "ceph_vg1"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        }
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    ],
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    "2": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "devices": [
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "/dev/loop5"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            ],
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_name": "ceph_lv2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_size": "21470642176",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "name": "ceph_lv2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "tags": {
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.cluster_name": "ceph",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.crush_device_class": "",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.encrypted": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.objectstore": "bluestore",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osd_id": "2",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.vdo": "0",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:                "ceph.with_tpm": "0"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            },
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "type": "block",
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:            "vg_name": "ceph_vg2"
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:        }
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]:    ]
Jan 31 03:16:27 np0005603663 elegant_volhard[147313]: }
Jan 31 03:16:27 np0005603663 systemd[1]: libpod-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603663 podman[147272]: 2026-01-31 08:16:27.172396664 +0000 UTC m=+0.450277108 container died c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:16:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3-merged.mount: Deactivated successfully.
Jan 31 03:16:27 np0005603663 podman[147272]: 2026-01-31 08:16:27.226757547 +0000 UTC m=+0.504637951 container remove c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:16:27 np0005603663 systemd[1]: libpod-conmon-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603663 python3.9[147447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.670739649 +0000 UTC m=+0.046453685 container create d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:16:27 np0005603663 systemd[1]: Started libpod-conmon-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope.
Jan 31 03:16:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.652416149 +0000 UTC m=+0.028130185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.748405681 +0000 UTC m=+0.124119707 container init d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.754319046 +0000 UTC m=+0.130033062 container start d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.757520885 +0000 UTC m=+0.133234901 container attach d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:16:27 np0005603663 determined_volhard[147615]: 167 167
Jan 31 03:16:27 np0005603663 systemd[1]: libpod-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.758169113 +0000 UTC m=+0.133883149 container died d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:16:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-86d67a8f1c5414d1b469ff923953ecd03d8bf9f9ba0f9289df35a9bddb2d5099-merged.mount: Deactivated successfully.
Jan 31 03:16:27 np0005603663 podman[147589]: 2026-01-31 08:16:27.797886279 +0000 UTC m=+0.173600275 container remove d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:16:27 np0005603663 systemd[1]: libpod-conmon-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:27 np0005603663 podman[147676]: 2026-01-31 08:16:27.937490656 +0000 UTC m=+0.042660589 container create 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:27 np0005603663 systemd[1]: Started libpod-conmon-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope.
Jan 31 03:16:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:16:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:28 np0005603663 podman[147676]: 2026-01-31 08:16:28.011563688 +0000 UTC m=+0.116733621 container init 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:16:28 np0005603663 podman[147676]: 2026-01-31 08:16:27.920320708 +0000 UTC m=+0.025490641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:16:28 np0005603663 podman[147676]: 2026-01-31 08:16:28.017562585 +0000 UTC m=+0.122732498 container start 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:16:28 np0005603663 podman[147676]: 2026-01-31 08:16:28.020562579 +0000 UTC m=+0.125732492 container attach 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:16:28 np0005603663 python3.9[147731]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 03:16:28 np0005603663 lvm[147806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:16:28 np0005603663 lvm[147806]: VG ceph_vg0 finished
Jan 31 03:16:28 np0005603663 lvm[147809]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:16:28 np0005603663 lvm[147809]: VG ceph_vg1 finished
Jan 31 03:16:28 np0005603663 lvm[147810]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:16:28 np0005603663 lvm[147810]: VG ceph_vg2 finished
Jan 31 03:16:28 np0005603663 pensive_yonath[147729]: {}
Jan 31 03:16:28 np0005603663 systemd[1]: libpod-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope: Deactivated successfully.
Jan 31 03:16:28 np0005603663 podman[147676]: 2026-01-31 08:16:28.770091087 +0000 UTC m=+0.875261000 container died 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:16:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876-merged.mount: Deactivated successfully.
Jan 31 03:16:29 np0005603663 podman[147676]: 2026-01-31 08:16:29.2187914 +0000 UTC m=+1.323961323 container remove 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:16:29 np0005603663 systemd[1]: libpod-conmon-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope: Deactivated successfully.
Jan 31 03:16:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:16:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:16:29 np0005603663 python3.9[147975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:30 np0005603663 python3.9[148123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847388.8327312-81-119805003470172/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:30 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:16:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:30 np0005603663 python3.9[148274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:31 np0005603663 python3.9[148395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847390.471877-96-196558196739024/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:16:31
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes']
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:16:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:32 np0005603663 python3.9[148547]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:16:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:16:33 np0005603663 python3.9[148631]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:16:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:35 np0005603663 python3.9[148784]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:16:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:36 np0005603663 python3.9[148937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:36 np0005603663 python3.9[149058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847395.6060321-133-217847457028105/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:37 np0005603663 python3.9[149208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:37 np0005603663 python3.9[149329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847396.668276-133-41418450536309/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:38 np0005603663 python3.9[149479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:39 np0005603663 python3.9[149600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847398.3192205-177-184363009209326/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:39 np0005603663 python3.9[149750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:40 np0005603663 python3.9[149871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847399.37619-177-1710377557794/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:40 np0005603663 python3.9[150021]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:16:41 np0005603663 python3.9[150175]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:42 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:42Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 31 03:16:42 np0005603663 ovn_controller[144989]: 2026-01-31T08:16:42Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 03:16:42 np0005603663 podman[150299]: 2026-01-31 08:16:42.143727991 +0000 UTC m=+0.119490638 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:16:42 np0005603663 python3.9[150342]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:42 np0005603663 python3.9[150431]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:16:43 np0005603663 python3.9[150583]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:43 np0005603663 python3.9[150661]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:44 np0005603663 python3.9[150813]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:44 np0005603663 python3.9[150965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:45 np0005603663 python3.9[151043]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:46 np0005603663 python3.9[151195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:46 np0005603663 python3.9[151273]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:47 np0005603663 python3.9[151425]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:16:47 np0005603663 systemd[1]: Reloading.
Jan 31 03:16:47 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:16:47 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:16:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:48 np0005603663 python3.9[151614]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:48 np0005603663 python3.9[151692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:49 np0005603663 python3.9[151844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:49 np0005603663 python3.9[151922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:50 np0005603663 python3.9[152074]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:16:50 np0005603663 systemd[1]: Reloading.
Jan 31 03:16:50 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:16:50 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:16:50 np0005603663 systemd[1]: Starting Create netns directory...
Jan 31 03:16:50 np0005603663 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 03:16:50 np0005603663 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 03:16:50 np0005603663 systemd[1]: Finished Create netns directory.
Jan 31 03:16:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:51 np0005603663 python3.9[152267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:51 np0005603663 python3.9[152419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.058405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412058501, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 946, "num_deletes": 251, "total_data_size": 1403443, "memory_usage": 1427776, "flush_reason": "Manual Compaction"}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412070459, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1380275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8842, "largest_seqno": 9787, "table_properties": {"data_size": 1375588, "index_size": 2275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9605, "raw_average_key_size": 18, "raw_value_size": 1366284, "raw_average_value_size": 2658, "num_data_blocks": 106, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847321, "oldest_key_time": 1769847321, "file_creation_time": 1769847412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12130 microseconds, and 6280 cpu microseconds.
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.070549) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1380275 bytes OK
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.070584) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072491) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072519) EVENT_LOG_v1 {"time_micros": 1769847412072511, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1398922, prev total WAL file size 1398922, number of live WAL files 2.
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.073103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1347KB)], [23(6943KB)]
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412073147, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8490215, "oldest_snapshot_seqno": -1}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3350 keys, 6612133 bytes, temperature: kUnknown
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412117916, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6612133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6587109, "index_size": 15571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81374, "raw_average_key_size": 24, "raw_value_size": 6523808, "raw_average_value_size": 1947, "num_data_blocks": 678, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.118311) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6612133 bytes
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.120130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.1 rd, 147.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 3864, records dropped: 514 output_compression: NoCompression
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.120157) EVENT_LOG_v1 {"time_micros": 1769847412120142, "job": 8, "event": "compaction_finished", "compaction_time_micros": 44908, "compaction_time_cpu_micros": 21063, "output_level": 6, "num_output_files": 1, "total_output_size": 6612133, "num_input_records": 3864, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412120459, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412121201, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.073053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:16:52 np0005603663 python3.9[152542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847411.2888505-328-15952052968702/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:53 np0005603663 python3.9[152694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:53 np0005603663 python3.9[152846]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:16:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:54 np0005603663 python3.9[152998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:16:54 np0005603663 python3.9[153121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847413.7682061-361-10031163139324/.source.json _original_basename=.jy0favfc follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:55 np0005603663 python3.9[153271]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:16:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:16:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:57 np0005603663 python3.9[153694]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 03:16:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:16:58 np0005603663 python3.9[153846]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 03:16:59 np0005603663 python3[153998]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 03:16:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:10 np0005603663 podman[154012]: 2026-01-31 08:17:10.349212958 +0000 UTC m=+11.107393778 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:17:10 np0005603663 podman[154134]: 2026-01-31 08:17:10.447220907 +0000 UTC m=+0.025054878 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:17:11 np0005603663 podman[154134]: 2026-01-31 08:17:11.052154081 +0000 UTC m=+0.629987982 container create 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:17:11 np0005603663 python3[153998]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:17:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:11 np0005603663 python3.9[154322]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:17:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:12 np0005603663 podman[154448]: 2026-01-31 08:17:12.423538253 +0000 UTC m=+0.225430428 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:17:12 np0005603663 python3.9[154492]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:12 np0005603663 python3.9[154577]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:17:13 np0005603663 python3.9[154728]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847432.9207106-439-19935681485815/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:14 np0005603663 python3.9[154804]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:17:14 np0005603663 systemd[1]: Reloading.
Jan 31 03:17:14 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:17:14 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:17:14 np0005603663 python3.9[154916]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:14 np0005603663 systemd[1]: Reloading.
Jan 31 03:17:14 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:17:14 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:17:15 np0005603663 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 03:17:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881fad8bab9bcb9120aebd18d25ae3dd80dcb5d3d3be236d25ba70ef23eaf771/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881fad8bab9bcb9120aebd18d25ae3dd80dcb5d3d3be236d25ba70ef23eaf771/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:15 np0005603663 systemd[1]: Started /usr/bin/podman healthcheck run 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869.
Jan 31 03:17:15 np0005603663 podman[154956]: 2026-01-31 08:17:15.490670409 +0000 UTC m=+0.216978892 container init 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + sudo -E kolla_set_configs
Jan 31 03:17:15 np0005603663 podman[154956]: 2026-01-31 08:17:15.540011153 +0000 UTC m=+0.266319596 container start 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:17:15 np0005603663 edpm-start-podman-container[154956]: ovn_metadata_agent
Jan 31 03:17:15 np0005603663 podman[154979]: 2026-01-31 08:17:15.700529292 +0000 UTC m=+0.149996507 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:17:15 np0005603663 edpm-start-podman-container[154955]: Creating additional drop-in dependency for "ovn_metadata_agent" (5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869)
Jan 31 03:17:15 np0005603663 systemd[1]: Reloading.
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Validating config file
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Copying service configuration files
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Writing out command to execute
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: ++ cat /run_command
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + CMD=neutron-ovn-metadata-agent
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + ARGS=
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + sudo kolla_copy_cacerts
Jan 31 03:17:15 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:17:15 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + [[ ! -n '' ]]
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + . kolla_extend_start
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + umask 0022
Jan 31 03:17:15 np0005603663 ovn_metadata_agent[154972]: + exec neutron-ovn-metadata-agent
Jan 31 03:17:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:16 np0005603663 systemd[1]: Started ovn_metadata_agent container.
Jan 31 03:17:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:17:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5575 writes, 24K keys, 5575 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5575 writes, 837 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5575 writes, 24K keys, 5575 commit groups, 1.0 writes per commit group, ingest: 18.85 MB, 0.03 MB/s#012Interval WAL: 5575 writes, 837 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 03:17:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:16 np0005603663 python3.9[155208]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 03:17:17 np0005603663 python3.9[155361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.824 154977 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.862 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.862 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.873 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.873 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.887 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c8bc61c4-1b90-42d4-9c52-3d83532ede66 (UUID: c8bc61c4-1b90-42d4-9c52-3d83532ede66) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.915 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.920 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.926 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c8bc61c4-1b90-42d4-9c52-3d83532ede66'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7efc988fdd30>], external_ids={}, name=c8bc61c4-1b90-42d4-9c52-3d83532ede66, nb_cfg_timestamp=1769847380441, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.927 154977 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7efc9887ec10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.931 154977 DEBUG oslo_service.service [-] Started child 155459 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.934 154977 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3_v07x8g/privsep.sock']#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.935 155459 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166633'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 31 03:17:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.959 155459 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.960 155459 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.960 155459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.965 155459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.974 155459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 03:17:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.982 155459 INFO eventlet.wsgi.server [-] (155459) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 31 03:17:18 np0005603663 python3.9[155490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847437.1628768-484-247964179401463/.source.yaml _original_basename=.tf7twk81 follow=False checksum=123065ba71fa8a2d5bb23ca29c6be2688936190b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:18 np0005603663 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.579 154977 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.580 154977 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3_v07x8g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.481 155516 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.486 155516 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.490 155516 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.490 155516 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155516#033[00m
Jan 31 03:17:18 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.584 155516 DEBUG oslo.privsep.daemon [-] privsep: reply[597cebf1-dffc-4322-9f34-1b632162ba4a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:18 np0005603663 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 03:17:18 np0005603663 systemd[1]: session-48.scope: Consumed 48.778s CPU time.
Jan 31 03:17:18 np0005603663 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Jan 31 03:17:18 np0005603663 systemd-logind[793]: Removed session 48.
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.627 155516 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9d3b1-0417-4b42-8727-9b64ba7a929d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.630 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, column=external_ids, values=({'neutron:ovn-metadata-id': '55e132ff-622c-524b-8a5a-3db2e758bc47'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.693 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.800 154977 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.819 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:17:19 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 03:17:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:17:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 6832 writes, 29K keys, 6832 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6832 writes, 1235 syncs, 5.53 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6832 writes, 29K keys, 6832 commit groups, 1.0 writes per commit group, ingest: 19.93 MB, 0.03 MB/s#012Interval WAL: 6832 writes, 1235 syncs, 5.53 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 31 03:17:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:25 np0005603663 systemd-logind[793]: New session 49 of user zuul.
Jan 31 03:17:25 np0005603663 systemd[1]: Started Session 49 of User zuul.
Jan 31 03:17:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:26 np0005603663 python3.9[155674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:17:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:27 np0005603663 python3.9[155830]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:17:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.8 total, 600.0 interval#012Cumulative writes: 5364 writes, 23K keys, 5364 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5364 writes, 713 syncs, 7.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5364 writes, 23K keys, 5364 commit groups, 1.0 writes per commit group, ingest: 18.56 MB, 0.03 MB/s#012Interval WAL: 5364 writes, 713 syncs, 7.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 03:17:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:28 np0005603663 python3.9[155995]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:17:28 np0005603663 systemd[1]: Reloading.
Jan 31 03:17:28 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:17:28 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:17:29 np0005603663 python3.9[156179]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:17:29 np0005603663 network[156196]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:17:29 np0005603663 network[156197]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:17:29 np0005603663 network[156198]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:17:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:17:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.343328503 +0000 UTC m=+0.057234674 container create 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.310929478 +0000 UTC m=+0.024835679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:17:31 np0005603663 systemd[1]: Started libpod-conmon-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope.
Jan 31 03:17:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.538607573 +0000 UTC m=+0.252513814 container init 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.547992266 +0000 UTC m=+0.261898467 container start 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.561553474 +0000 UTC m=+0.275459685 container attach 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:17:31 np0005603663 busy_jang[156534]: 167 167
Jan 31 03:17:31 np0005603663 systemd[1]: libpod-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope: Deactivated successfully.
Jan 31 03:17:31 np0005603663 conmon[156534]: conmon 2949aef1c4b4875ee078 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope/container/memory.events
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.565115351 +0000 UTC m=+0.279021552 container died 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-319779b9a1984a67a2e776dfff1cb499b58137d7a7009f1eddd573bd58adf7fd-merged.mount: Deactivated successfully.
Jan 31 03:17:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:31 np0005603663 podman[156507]: 2026-01-31 08:17:31.660207155 +0000 UTC m=+0.374113316 container remove 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:17:31 np0005603663 systemd[1]: libpod-conmon-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope: Deactivated successfully.
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:17:31
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'backups', 'default.rgw.control']
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:17:31 np0005603663 podman[156588]: 2026-01-31 08:17:31.818409138 +0000 UTC m=+0.066763211 container create 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:17:31 np0005603663 systemd[1]: Started libpod-conmon-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope.
Jan 31 03:17:31 np0005603663 podman[156588]: 2026-01-31 08:17:31.781461956 +0000 UTC m=+0.029816019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:31 np0005603663 podman[156588]: 2026-01-31 08:17:31.936134023 +0000 UTC m=+0.184488096 container init 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:17:31 np0005603663 podman[156588]: 2026-01-31 08:17:31.94336107 +0000 UTC m=+0.191715123 container start 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:17:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:31 np0005603663 podman[156588]: 2026-01-31 08:17:31.970386454 +0000 UTC m=+0.218740517 container attach 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 03:17:32 np0005603663 cool_goodall[156628]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:17:32 np0005603663 cool_goodall[156628]: --> All data devices are unavailable
Jan 31 03:17:32 np0005603663 systemd[1]: libpod-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope: Deactivated successfully.
Jan 31 03:17:32 np0005603663 podman[156588]: 2026-01-31 08:17:32.428315623 +0000 UTC m=+0.676669686 container died 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:32 np0005603663 python3.9[156737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9-merged.mount: Deactivated successfully.
Jan 31 03:17:32 np0005603663 podman[156588]: 2026-01-31 08:17:32.572329039 +0000 UTC m=+0.820683102 container remove 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:17:32 np0005603663 systemd[1]: libpod-conmon-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope: Deactivated successfully.
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:17:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.056680262 +0000 UTC m=+0.097109804 container create fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:32.985117368 +0000 UTC m=+0.025546920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:33 np0005603663 systemd[1]: Started libpod-conmon-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope.
Jan 31 03:17:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:33 np0005603663 python3.9[156968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.205008358 +0000 UTC m=+0.245437900 container init fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.212938657 +0000 UTC m=+0.253368169 container start fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:17:33 np0005603663 mystifying_khorana[156997]: 167 167
Jan 31 03:17:33 np0005603663 systemd[1]: libpod-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope: Deactivated successfully.
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.271589453 +0000 UTC m=+0.312019005 container attach fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.273067217 +0000 UTC m=+0.313496749 container died fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-477d064de361f52d530c8496a3506ed45cd5fe87d7d05f8113e84c59fa6d98fa-merged.mount: Deactivated successfully.
Jan 31 03:17:33 np0005603663 podman[156981]: 2026-01-31 08:17:33.366719337 +0000 UTC m=+0.407148869 container remove fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:17:33 np0005603663 systemd[1]: libpod-conmon-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope: Deactivated successfully.
Jan 31 03:17:33 np0005603663 podman[157122]: 2026-01-31 08:17:33.554686627 +0000 UTC m=+0.077351310 container create 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:17:33 np0005603663 podman[157122]: 2026-01-31 08:17:33.507227298 +0000 UTC m=+0.029892031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:33 np0005603663 systemd[1]: Started libpod-conmon-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope.
Jan 31 03:17:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:33 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:33 np0005603663 podman[157122]: 2026-01-31 08:17:33.738998526 +0000 UTC m=+0.261663249 container init 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:33 np0005603663 podman[157122]: 2026-01-31 08:17:33.749515523 +0000 UTC m=+0.272180206 container start 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:33 np0005603663 podman[157122]: 2026-01-31 08:17:33.759356919 +0000 UTC m=+0.282021572 container attach 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:17:33 np0005603663 python3.9[157187]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]: {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    "0": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "devices": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "/dev/loop3"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            ],
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_name": "ceph_lv0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_size": "21470642176",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "name": "ceph_lv0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "tags": {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_name": "ceph",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.crush_device_class": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.encrypted": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.objectstore": "bluestore",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_id": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.vdo": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.with_tpm": "0"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            },
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "vg_name": "ceph_vg0"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        }
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    ],
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    "1": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "devices": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "/dev/loop4"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            ],
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_name": "ceph_lv1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_size": "21470642176",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "name": "ceph_lv1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "tags": {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_name": "ceph",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.crush_device_class": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.encrypted": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.objectstore": "bluestore",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_id": "1",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.vdo": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.with_tpm": "0"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            },
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "vg_name": "ceph_vg1"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        }
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    ],
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    "2": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "devices": [
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "/dev/loop5"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            ],
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_name": "ceph_lv2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_size": "21470642176",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "name": "ceph_lv2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "tags": {
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.cluster_name": "ceph",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.crush_device_class": "",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.encrypted": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.objectstore": "bluestore",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osd_id": "2",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.vdo": "0",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:                "ceph.with_tpm": "0"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            },
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "type": "block",
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:            "vg_name": "ceph_vg2"
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:        }
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]:    ]
Jan 31 03:17:34 np0005603663 lucid_kilby[157190]: }
Jan 31 03:17:34 np0005603663 systemd[1]: libpod-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope: Deactivated successfully.
Jan 31 03:17:34 np0005603663 podman[157122]: 2026-01-31 08:17:34.100681617 +0000 UTC m=+0.623346250 container died 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:17:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978-merged.mount: Deactivated successfully.
Jan 31 03:17:34 np0005603663 podman[157122]: 2026-01-31 08:17:34.227233727 +0000 UTC m=+0.749898430 container remove 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:17:34 np0005603663 systemd[1]: libpod-conmon-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope: Deactivated successfully.
Jan 31 03:17:34 np0005603663 python3.9[157394]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.715919212 +0000 UTC m=+0.076076402 container create 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.665086051 +0000 UTC m=+0.025243301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:34 np0005603663 systemd[1]: Started libpod-conmon-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope.
Jan 31 03:17:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.839378349 +0000 UTC m=+0.199535539 container init 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.84473256 +0000 UTC m=+0.204889730 container start 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:34 np0005603663 tender_montalcini[157469]: 167 167
Jan 31 03:17:34 np0005603663 systemd[1]: libpod-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope: Deactivated successfully.
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.848676999 +0000 UTC m=+0.208834179 container attach 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:17:34 np0005603663 podman[157427]: 2026-01-31 08:17:34.848980098 +0000 UTC m=+0.209137268 container died 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:17:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b95bc1f7ae05d5caae1da9ff84ff0ce6eed6ef15095d2e0b669928d461ceb651-merged.mount: Deactivated successfully.
Jan 31 03:17:35 np0005603663 podman[157427]: 2026-01-31 08:17:35.079703315 +0000 UTC m=+0.439860505 container remove 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:17:35 np0005603663 systemd[1]: libpod-conmon-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope: Deactivated successfully.
Jan 31 03:17:35 np0005603663 podman[157623]: 2026-01-31 08:17:35.30370418 +0000 UTC m=+0.104117876 container create c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:17:35 np0005603663 podman[157623]: 2026-01-31 08:17:35.236215988 +0000 UTC m=+0.036629794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:17:35 np0005603663 systemd[1]: Started libpod-conmon-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope.
Jan 31 03:17:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:17:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:35 np0005603663 python3.9[157617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:35 np0005603663 podman[157623]: 2026-01-31 08:17:35.441169469 +0000 UTC m=+0.241583245 container init c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:17:35 np0005603663 podman[157623]: 2026-01-31 08:17:35.448762598 +0000 UTC m=+0.249176334 container start c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:17:35 np0005603663 podman[157623]: 2026-01-31 08:17:35.479661798 +0000 UTC m=+0.280075594 container attach c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 03:17:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:36 np0005603663 lvm[157873]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:17:36 np0005603663 lvm[157873]: VG ceph_vg1 finished
Jan 31 03:17:36 np0005603663 lvm[157872]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:17:36 np0005603663 lvm[157872]: VG ceph_vg0 finished
Jan 31 03:17:36 np0005603663 lvm[157875]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:17:36 np0005603663 lvm[157875]: VG ceph_vg2 finished
Jan 31 03:17:36 np0005603663 python3.9[157831]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:36 np0005603663 infallible_knuth[157641]: {}
Jan 31 03:17:36 np0005603663 systemd[1]: libpod-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Deactivated successfully.
Jan 31 03:17:36 np0005603663 systemd[1]: libpod-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Consumed 1.098s CPU time.
Jan 31 03:17:36 np0005603663 podman[157623]: 2026-01-31 08:17:36.387599646 +0000 UTC m=+1.188013372 container died c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:17:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969-merged.mount: Deactivated successfully.
Jan 31 03:17:36 np0005603663 python3.9[158044]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:17:36 np0005603663 podman[157623]: 2026-01-31 08:17:36.992339254 +0000 UTC m=+1.792752960 container remove c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:17:36 np0005603663 systemd[1]: libpod-conmon-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Deactivated successfully.
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:17:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:37 np0005603663 python3.9[158222]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:38 np0005603663 python3.9[158374]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:39 np0005603663 python3.9[158526]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:39 np0005603663 python3.9[158678]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:40 np0005603663 python3.9[158830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:41 np0005603663 python3.9[158982]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:41 np0005603663 python3.9[159134]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:42 np0005603663 python3.9[159286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:42 np0005603663 podman[159410]: 2026-01-31 08:17:42.901096698 +0000 UTC m=+0.095537458 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 03:17:43 np0005603663 python3.9[159455]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:17:43 np0005603663 python3.9[159617]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:44 np0005603663 python3.9[159769]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:44 np0005603663 python3.9[159921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:45 np0005603663 python3.9[160073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:45 np0005603663 podman[160197]: 2026-01-31 08:17:45.920053248 +0000 UTC m=+0.098286200 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:17:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:46 np0005603663 python3.9[160240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:17:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:46 np0005603663 python3.9[160394]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:47 np0005603663 python3.9[160546]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:17:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:48 np0005603663 python3.9[160698]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:17:48 np0005603663 systemd[1]: Reloading.
Jan 31 03:17:48 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:17:48 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:17:49 np0005603663 python3.9[160885]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:49 np0005603663 python3.9[161038]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:50 np0005603663 python3.9[161191]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:51 np0005603663 python3.9[161344]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:51 np0005603663 python3.9[161497]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:52 np0005603663 python3.9[161650]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:52 np0005603663 python3.9[161803]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:17:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:53 np0005603663 python3.9[161956]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 03:17:54 np0005603663 python3.9[162109]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:17:55 np0005603663 python3.9[162267]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 03:17:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:55 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:17:55 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:17:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:17:57 np0005603663 python3.9[162428]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:17:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:17:58 np0005603663 python3.9[162512]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:17:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:13 np0005603663 podman[162586]: 2026-01-31 08:18:13.202749375 +0000 UTC m=+0.089102104 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:18:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:16 np0005603663 podman[162703]: 2026-01-31 08:18:16.16637649 +0000 UTC m=+0.055510303 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:18:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:18:31
Jan 31 03:18:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:18:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:18:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Jan 31 03:18:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:18:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:18:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:35 np0005603663 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:18:35 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:18:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:37 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 03:18:37 np0005603663 podman[162854]: 2026-01-31 08:18:37.72611763 +0000 UTC m=+0.083372880 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:18:37 np0005603663 podman[162854]: 2026-01-31 08:18:37.883640445 +0000 UTC m=+0.240895645 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:18:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:18:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:18:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:39 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.737441689 +0000 UTC m=+0.051473376 container create 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:18:39 np0005603663 systemd[1]: Started libpod-conmon-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope.
Jan 31 03:18:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.717891616 +0000 UTC m=+0.031923273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.819090609 +0000 UTC m=+0.133122286 container init 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.826174479 +0000 UTC m=+0.140206166 container start 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.82973148 +0000 UTC m=+0.143763137 container attach 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:39 np0005603663 quizzical_keldysh[163199]: 167 167
Jan 31 03:18:39 np0005603663 systemd[1]: libpod-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope: Deactivated successfully.
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.834657909 +0000 UTC m=+0.148689586 container died 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:18:39 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bffd6286bd175dd2399a6957c1ff7357cc89828993b89c0dd3d1193d32fc49a1-merged.mount: Deactivated successfully.
Jan 31 03:18:39 np0005603663 podman[163183]: 2026-01-31 08:18:39.893660238 +0000 UTC m=+0.207691915 container remove 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:18:39 np0005603663 systemd[1]: libpod-conmon-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope: Deactivated successfully.
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.088817418 +0000 UTC m=+0.060317247 container create e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:40 np0005603663 systemd[1]: Started libpod-conmon-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope.
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.060678052 +0000 UTC m=+0.032177931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:40 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:40 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.207559457 +0000 UTC m=+0.179059266 container init e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.216889991 +0000 UTC m=+0.188389830 container start e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.221333226 +0000 UTC m=+0.192833075 container attach e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:18:40 np0005603663 frosty_cohen[163243]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:18:40 np0005603663 frosty_cohen[163243]: --> All data devices are unavailable
Jan 31 03:18:40 np0005603663 systemd[1]: libpod-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope: Deactivated successfully.
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.689101237 +0000 UTC m=+0.660601076 container died e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:40 np0005603663 systemd[1]: var-lib-containers-storage-overlay-783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd-merged.mount: Deactivated successfully.
Jan 31 03:18:40 np0005603663 podman[163225]: 2026-01-31 08:18:40.7426047 +0000 UTC m=+0.714104509 container remove e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:18:40 np0005603663 systemd[1]: libpod-conmon-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope: Deactivated successfully.
Jan 31 03:18:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.225835567 +0000 UTC m=+0.044803268 container create 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:18:41 np0005603663 systemd[1]: Started libpod-conmon-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope.
Jan 31 03:18:41 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.298804851 +0000 UTC m=+0.117772562 container init 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.303030021 +0000 UTC m=+0.121997712 container start 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.306122898 +0000 UTC m=+0.125090579 container attach 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.210483613 +0000 UTC m=+0.029451344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:41 np0005603663 cool_stonebraker[163352]: 167 167
Jan 31 03:18:41 np0005603663 systemd[1]: libpod-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope: Deactivated successfully.
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.309579356 +0000 UTC m=+0.128547077 container died 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:41 np0005603663 systemd[1]: var-lib-containers-storage-overlay-24093636e9012feba5f33cce0c22639e47ffd8b5c9c85c3df0816e932b291980-merged.mount: Deactivated successfully.
Jan 31 03:18:41 np0005603663 podman[163335]: 2026-01-31 08:18:41.352275234 +0000 UTC m=+0.171242955 container remove 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:18:41 np0005603663 systemd[1]: libpod-conmon-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope: Deactivated successfully.
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.525359959 +0000 UTC m=+0.060337037 container create 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:41 np0005603663 systemd[1]: Started libpod-conmon-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope.
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.497746508 +0000 UTC m=+0.032723626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:41 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.61870692 +0000 UTC m=+0.153684038 container init 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.632248743 +0000 UTC m=+0.167225811 container start 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.636558935 +0000 UTC m=+0.171536013 container attach 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:18:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:41 np0005603663 musing_nash[163393]: {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    "0": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "devices": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "/dev/loop3"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            ],
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_name": "ceph_lv0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_size": "21470642176",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "name": "ceph_lv0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "tags": {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_name": "ceph",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.crush_device_class": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.encrypted": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.objectstore": "bluestore",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_id": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.vdo": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.with_tpm": "0"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            },
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "vg_name": "ceph_vg0"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        }
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    ],
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    "1": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "devices": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "/dev/loop4"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            ],
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_name": "ceph_lv1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_size": "21470642176",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "name": "ceph_lv1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "tags": {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_name": "ceph",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.crush_device_class": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.encrypted": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.objectstore": "bluestore",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_id": "1",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.vdo": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.with_tpm": "0"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            },
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "vg_name": "ceph_vg1"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        }
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    ],
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    "2": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "devices": [
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "/dev/loop5"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            ],
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_name": "ceph_lv2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_size": "21470642176",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "name": "ceph_lv2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "tags": {
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.cluster_name": "ceph",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.crush_device_class": "",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.encrypted": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.objectstore": "bluestore",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osd_id": "2",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.vdo": "0",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:                "ceph.with_tpm": "0"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            },
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "type": "block",
Jan 31 03:18:41 np0005603663 musing_nash[163393]:            "vg_name": "ceph_vg2"
Jan 31 03:18:41 np0005603663 musing_nash[163393]:        }
Jan 31 03:18:41 np0005603663 musing_nash[163393]:    ]
Jan 31 03:18:41 np0005603663 musing_nash[163393]: }
Jan 31 03:18:41 np0005603663 systemd[1]: libpod-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope: Deactivated successfully.
Jan 31 03:18:41 np0005603663 podman[163377]: 2026-01-31 08:18:41.962120493 +0000 UTC m=+0.497097571 container died 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:18:41 np0005603663 systemd[1]: var-lib-containers-storage-overlay-88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6-merged.mount: Deactivated successfully.
Jan 31 03:18:42 np0005603663 podman[163377]: 2026-01-31 08:18:42.01470604 +0000 UTC m=+0.549683108 container remove 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:18:42 np0005603663 systemd[1]: libpod-conmon-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope: Deactivated successfully.
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.487778601 +0000 UTC m=+0.056270002 container create e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:18:42 np0005603663 systemd[1]: Started libpod-conmon-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope.
Jan 31 03:18:42 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.463542416 +0000 UTC m=+0.032033877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.567586869 +0000 UTC m=+0.136078260 container init e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.575031179 +0000 UTC m=+0.143522570 container start e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.5785922 +0000 UTC m=+0.147083601 container attach e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:18:42 np0005603663 frosty_brahmagupta[163490]: 167 167
Jan 31 03:18:42 np0005603663 systemd[1]: libpod-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope: Deactivated successfully.
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.582205062 +0000 UTC m=+0.150696463 container died e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:18:42 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8c5f89f6baf97ac9c1cc600035dd7a3f8a0b71f0190eb9795857462ea51d37e5-merged.mount: Deactivated successfully.
Jan 31 03:18:42 np0005603663 podman[163474]: 2026-01-31 08:18:42.625902608 +0000 UTC m=+0.194394009 container remove e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:18:42 np0005603663 systemd[1]: libpod-conmon-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope: Deactivated successfully.
Jan 31 03:18:42 np0005603663 podman[163513]: 2026-01-31 08:18:42.785305647 +0000 UTC m=+0.045269042 container create 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:18:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:42 np0005603663 systemd[1]: Started libpod-conmon-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope.
Jan 31 03:18:42 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:18:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:42 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:42 np0005603663 podman[163513]: 2026-01-31 08:18:42.767412531 +0000 UTC m=+0.027375966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:18:42 np0005603663 podman[163513]: 2026-01-31 08:18:42.86745559 +0000 UTC m=+0.127418995 container init 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:18:42 np0005603663 podman[163513]: 2026-01-31 08:18:42.875278862 +0000 UTC m=+0.135242217 container start 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:18:42 np0005603663 podman[163513]: 2026-01-31 08:18:42.878469602 +0000 UTC m=+0.138432947 container attach 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:18:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:18:43 np0005603663 lvm[163616]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:18:43 np0005603663 lvm[163616]: VG ceph_vg1 finished
Jan 31 03:18:43 np0005603663 lvm[163615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:18:43 np0005603663 lvm[163615]: VG ceph_vg0 finished
Jan 31 03:18:43 np0005603663 lvm[163617]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:18:43 np0005603663 lvm[163617]: VG ceph_vg2 finished
Jan 31 03:18:43 np0005603663 podman[163605]: 2026-01-31 08:18:43.537183083 +0000 UTC m=+0.101171262 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:18:43 np0005603663 keen_maxwell[163530]: {}
Jan 31 03:18:43 np0005603663 systemd[1]: libpod-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Deactivated successfully.
Jan 31 03:18:43 np0005603663 systemd[1]: libpod-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Consumed 1.008s CPU time.
Jan 31 03:18:43 np0005603663 podman[163513]: 2026-01-31 08:18:43.582609708 +0000 UTC m=+0.842573103 container died 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:18:43 np0005603663 systemd[1]: var-lib-containers-storage-overlay-41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0-merged.mount: Deactivated successfully.
Jan 31 03:18:43 np0005603663 podman[163513]: 2026-01-31 08:18:43.630244636 +0000 UTC m=+0.890208061 container remove 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:18:43 np0005603663 systemd[1]: libpod-conmon-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Deactivated successfully.
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:43 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:18:44 np0005603663 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:18:44 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:18:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:47 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 03:18:47 np0005603663 podman[163684]: 2026-01-31 08:18:47.174120912 +0000 UTC m=+0.064624859 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 03:18:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:18:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:18:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:14 np0005603663 podman[179769]: 2026-01-31 08:19:14.166244772 +0000 UTC m=+0.062525129 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:19:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:18 np0005603663 podman[180586]: 2026-01-31 08:19:18.000113612 +0000 UTC m=+0.053962747 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:19:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:29 np0005603663 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability open_perms=1
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability always_check_network=0
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 03:19:29 np0005603663 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 03:19:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:19:31
Jan 31 03:19:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:19:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:19:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root']
Jan 31 03:19:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:19:32 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 03:19:32 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 03:19:32 np0005603663 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:19:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:19:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:41 np0005603663 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 03:19:41 np0005603663 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 03:19:41 np0005603663 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 03:19:41 np0005603663 systemd[1]: sshd.service: Consumed 2.442s CPU time, read 32.0K from disk, written 28.0K to disk.
Jan 31 03:19:41 np0005603663 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 03:19:41 np0005603663 systemd[1]: Stopping sshd-keygen.target...
Jan 31 03:19:41 np0005603663 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 03:19:41 np0005603663 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 03:19:41 np0005603663 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 03:19:41 np0005603663 systemd[1]: Reached target sshd-keygen.target.
Jan 31 03:19:41 np0005603663 systemd[1]: Starting OpenSSH server daemon...
Jan 31 03:19:41 np0005603663 systemd[1]: Started OpenSSH server daemon.
Jan 31 03:19:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:19:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:19:43 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:19:43 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:19:43 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:43 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:43 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:44 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:19:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:19:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:19:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:19:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:19:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:19:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:45 np0005603663 podman[183634]: 2026-01-31 08:19:45.257583906 +0000 UTC m=+0.140189961 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:19:45 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:19:46 np0005603663 podman[185612]: 2026-01-31 08:19:46.474658483 +0000 UTC m=+0.016721461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:46 np0005603663 podman[185612]: 2026-01-31 08:19:46.930205886 +0000 UTC m=+0.472268824 container create c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:19:47 np0005603663 systemd[1]: Started libpod-conmon-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope.
Jan 31 03:19:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:19:47 np0005603663 podman[185612]: 2026-01-31 08:19:47.242920056 +0000 UTC m=+0.784983014 container init c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:19:47 np0005603663 podman[185612]: 2026-01-31 08:19:47.250301703 +0000 UTC m=+0.792364641 container start c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:19:47 np0005603663 festive_allen[186553]: 167 167
Jan 31 03:19:47 np0005603663 systemd[1]: libpod-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope: Deactivated successfully.
Jan 31 03:19:47 np0005603663 podman[185612]: 2026-01-31 08:19:47.394467915 +0000 UTC m=+0.936530873 container attach c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:19:47 np0005603663 podman[185612]: 2026-01-31 08:19:47.395176705 +0000 UTC m=+0.937239643 container died c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:19:47 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fbfaa65765c18f7038f2e6cf54d502a6f8a6e2f2f3babc28e3d55ae0fc407d56-merged.mount: Deactivated successfully.
Jan 31 03:19:47 np0005603663 podman[185612]: 2026-01-31 08:19:47.933334071 +0000 UTC m=+1.475397019 container remove c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:19:48 np0005603663 systemd[1]: libpod-conmon-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope: Deactivated successfully.
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.108524345 +0000 UTC m=+0.099873218 container create b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.028388102 +0000 UTC m=+0.019736955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:48 np0005603663 systemd[1]: Started libpod-conmon-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope.
Jan 31 03:19:48 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603663 podman[187774]: 2026-01-31 08:19:48.217181149 +0000 UTC m=+0.178565810 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.370330153 +0000 UTC m=+0.361679006 container init b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.379021958 +0000 UTC m=+0.370370791 container start b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.442770889 +0000 UTC m=+0.434119752 container attach b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:19:48 np0005603663 compassionate_wilbur[187996]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:19:48 np0005603663 compassionate_wilbur[187996]: --> All data devices are unavailable
Jan 31 03:19:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:48 np0005603663 systemd[1]: libpod-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope: Deactivated successfully.
Jan 31 03:19:48 np0005603663 podman[187734]: 2026-01-31 08:19:48.858867584 +0000 UTC m=+0.850216427 container died b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:19:49 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df-merged.mount: Deactivated successfully.
Jan 31 03:19:49 np0005603663 podman[187734]: 2026-01-31 08:19:49.164067772 +0000 UTC m=+1.155416615 container remove b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:19:49 np0005603663 systemd[1]: libpod-conmon-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope: Deactivated successfully.
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.578926072 +0000 UTC m=+0.060301146 container create 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.535858571 +0000 UTC m=+0.017233665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:49 np0005603663 systemd[1]: Started libpod-conmon-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope.
Jan 31 03:19:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.748858908 +0000 UTC m=+0.230234042 container init 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.75569314 +0000 UTC m=+0.237068244 container start 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:19:49 np0005603663 distracted_mcnulty[190076]: 167 167
Jan 31 03:19:49 np0005603663 systemd[1]: libpod-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope: Deactivated successfully.
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.903161475 +0000 UTC m=+0.384536589 container attach 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:19:49 np0005603663 podman[189887]: 2026-01-31 08:19:49.903533025 +0000 UTC m=+0.384908109 container died 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:19:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7bbbede171c708cbf57bab0953a4741631f5eba4cdd37be644667fa6e14b8e70-merged.mount: Deactivated successfully.
Jan 31 03:19:50 np0005603663 podman[189887]: 2026-01-31 08:19:50.271001874 +0000 UTC m=+0.752376958 container remove 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:19:50 np0005603663 systemd[1]: libpod-conmon-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope: Deactivated successfully.
Jan 31 03:19:50 np0005603663 podman[190501]: 2026-01-31 08:19:50.375382108 +0000 UTC m=+0.027198886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:50 np0005603663 podman[190501]: 2026-01-31 08:19:50.787976114 +0000 UTC m=+0.439792872 container create 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:19:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:50 np0005603663 systemd[1]: Started libpod-conmon-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope.
Jan 31 03:19:50 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:50 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:50 np0005603663 podman[190501]: 2026-01-31 08:19:50.904031546 +0000 UTC m=+0.555848324 container init 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:19:50 np0005603663 podman[190501]: 2026-01-31 08:19:50.912442612 +0000 UTC m=+0.564259340 container start 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:19:50 np0005603663 podman[190501]: 2026-01-31 08:19:50.933907776 +0000 UTC m=+0.585724604 container attach 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]: {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    "0": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "devices": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "/dev/loop3"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            ],
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_name": "ceph_lv0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_size": "21470642176",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "name": "ceph_lv0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "tags": {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_name": "ceph",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.crush_device_class": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.encrypted": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.objectstore": "bluestore",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_id": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.vdo": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.with_tpm": "0"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            },
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "vg_name": "ceph_vg0"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        }
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    ],
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    "1": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "devices": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "/dev/loop4"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            ],
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_name": "ceph_lv1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_size": "21470642176",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "name": "ceph_lv1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "tags": {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_name": "ceph",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.crush_device_class": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.encrypted": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.objectstore": "bluestore",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_id": "1",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.vdo": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.with_tpm": "0"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            },
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "vg_name": "ceph_vg1"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        }
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    ],
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    "2": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "devices": [
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "/dev/loop5"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            ],
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_name": "ceph_lv2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_size": "21470642176",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "name": "ceph_lv2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "tags": {
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.cluster_name": "ceph",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.crush_device_class": "",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.encrypted": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.objectstore": "bluestore",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osd_id": "2",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.vdo": "0",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:                "ceph.with_tpm": "0"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            },
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "type": "block",
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:            "vg_name": "ceph_vg2"
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:        }
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]:    ]
Jan 31 03:19:51 np0005603663 relaxed_chatterjee[190517]: }
Jan 31 03:19:51 np0005603663 systemd[1]: libpod-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope: Deactivated successfully.
Jan 31 03:19:51 np0005603663 podman[190501]: 2026-01-31 08:19:51.221478888 +0000 UTC m=+0.873295676 container died 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:19:51 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1-merged.mount: Deactivated successfully.
Jan 31 03:19:51 np0005603663 podman[190501]: 2026-01-31 08:19:51.54103835 +0000 UTC m=+1.192855108 container remove 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:19:51 np0005603663 systemd[1]: libpod-conmon-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope: Deactivated successfully.
Jan 31 03:19:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:51 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:19:51 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:19:51 np0005603663 systemd[1]: man-db-cache-update.service: Consumed 7.684s CPU time.
Jan 31 03:19:51 np0005603663 systemd[1]: run-r058dbcfb0e9246cab0572b18b149369a.service: Deactivated successfully.
Jan 31 03:19:51 np0005603663 python3.9[190693]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:19:51 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:51 np0005603663 podman[190757]: 2026-01-31 08:19:51.992511189 +0000 UTC m=+0.068949559 container create b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:19:52 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:52 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:51.946104365 +0000 UTC m=+0.022542775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:52 np0005603663 systemd[1]: Started libpod-conmon-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope.
Jan 31 03:19:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:52.246978782 +0000 UTC m=+0.323417192 container init b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:52.251514149 +0000 UTC m=+0.327952519 container start b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:19:52 np0005603663 keen_kilby[190810]: 167 167
Jan 31 03:19:52 np0005603663 systemd[1]: libpod-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope: Deactivated successfully.
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:52.266068728 +0000 UTC m=+0.342507108 container attach b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:52.266975044 +0000 UTC m=+0.343413424 container died b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:19:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b656e9f857119f263029f74c354e410357fc9d608e7b1649fc28c29414d4e81b-merged.mount: Deactivated successfully.
Jan 31 03:19:52 np0005603663 podman[190757]: 2026-01-31 08:19:52.389888958 +0000 UTC m=+0.466327328 container remove b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:19:52 np0005603663 systemd[1]: libpod-conmon-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope: Deactivated successfully.
Jan 31 03:19:52 np0005603663 podman[190941]: 2026-01-31 08:19:52.513209194 +0000 UTC m=+0.051796436 container create e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:19:52 np0005603663 systemd[1]: Started libpod-conmon-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope.
Jan 31 03:19:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:19:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:52 np0005603663 podman[190941]: 2026-01-31 08:19:52.48672686 +0000 UTC m=+0.025314132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:19:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:52 np0005603663 podman[190941]: 2026-01-31 08:19:52.604177571 +0000 UTC m=+0.142764823 container init e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:19:52 np0005603663 podman[190941]: 2026-01-31 08:19:52.610281512 +0000 UTC m=+0.148869194 container start e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:19:52 np0005603663 podman[190941]: 2026-01-31 08:19:52.632843816 +0000 UTC m=+0.171431078 container attach e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:19:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:52 np0005603663 python3.9[191001]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:19:52 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:52 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:52 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:53 np0005603663 lvm[191120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:19:53 np0005603663 lvm[191120]: VG ceph_vg0 finished
Jan 31 03:19:53 np0005603663 lvm[191121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:19:53 np0005603663 lvm[191121]: VG ceph_vg1 finished
Jan 31 03:19:53 np0005603663 lvm[191123]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:19:53 np0005603663 lvm[191123]: VG ceph_vg2 finished
Jan 31 03:19:53 np0005603663 brave_keldysh[191004]: {}
Jan 31 03:19:53 np0005603663 systemd[1]: libpod-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Deactivated successfully.
Jan 31 03:19:53 np0005603663 systemd[1]: libpod-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Consumed 1.002s CPU time.
Jan 31 03:19:53 np0005603663 podman[190941]: 2026-01-31 08:19:53.368867873 +0000 UTC m=+0.907455125 container died e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:19:53 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be-merged.mount: Deactivated successfully.
Jan 31 03:19:53 np0005603663 python3.9[191291]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:19:53 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:53 np0005603663 podman[190941]: 2026-01-31 08:19:53.979455095 +0000 UTC m=+1.518042337 container remove e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:19:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:19:54 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:54 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:19:54 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:54 np0005603663 systemd[1]: libpod-conmon-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Deactivated successfully.
Jan 31 03:19:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:54 np0005603663 python3.9[191506]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:19:54 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:55 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:55 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:55 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:19:55 np0005603663 python3.9[191695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:19:56 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:56 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:56 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:19:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:57 np0005603663 python3.9[191886]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:19:57 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:57 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:57 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:58 np0005603663 python3.9[192076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:19:58 np0005603663 systemd[1]: Reloading.
Jan 31 03:19:58 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:19:58 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:19:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:19:59 np0005603663 python3.9[192266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:19:59 np0005603663 python3.9[192421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:00 np0005603663 systemd[1]: Reloading.
Jan 31 03:20:00 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:20:00 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:20:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:01 np0005603663 python3.9[192611]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 03:20:01 np0005603663 systemd[1]: Reloading.
Jan 31 03:20:01 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:20:01 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:20:01 np0005603663 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 03:20:01 np0005603663 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 03:20:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:02 np0005603663 python3.9[192803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:03 np0005603663 python3.9[192958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:03 np0005603663 python3.9[193113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:04 np0005603663 python3.9[193268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:05 np0005603663 python3.9[193423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:06 np0005603663 python3.9[193578]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:07 np0005603663 python3.9[193733]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:07 np0005603663 python3.9[193888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:08 np0005603663 python3.9[194043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:09 np0005603663 python3.9[194198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:10 np0005603663 python3.9[194353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:11 np0005603663 python3.9[194508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:12 np0005603663 python3.9[194663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:12 np0005603663 python3.9[194818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 03:20:13 np0005603663 python3.9[194973]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:14 np0005603663 python3.9[195125]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:14 np0005603663 python3.9[195277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:15 np0005603663 podman[195429]: 2026-01-31 08:20:15.358126377 +0000 UTC m=+0.063190607 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:20:15 np0005603663 python3.9[195430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:16 np0005603663 python3.9[195607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:16 np0005603663 python3.9[195759]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:20:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:17 np0005603663 python3.9[195909]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:20:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.878 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.879 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.879 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:18 np0005603663 python3.9[196061]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:18 np0005603663 podman[196158]: 2026-01-31 08:20:18.736746686 +0000 UTC m=+0.073415904 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:18 np0005603663 python3.9[196203]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847617.5111723-557-228513413920331/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:19 np0005603663 python3.9[196357]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:19 np0005603663 python3.9[196482]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847619.0429628-557-11786905300880/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:20 np0005603663 python3.9[196634]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:20 np0005603663 python3.9[196759]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847620.0355444-557-90210697993411/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:21 np0005603663 python3.9[196911]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:22 np0005603663 python3.9[197036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847621.1396127-557-262985101030338/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:22 np0005603663 python3.9[197188]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:23 np0005603663 python3.9[197313]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847622.2890234-557-59066174173586/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:23 np0005603663 python3.9[197465]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:24 np0005603663 python3.9[197590]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847623.3872836-557-39375791179528/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:24 np0005603663 python3.9[197742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:25 np0005603663 python3.9[197865]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847624.5778432-557-273618214377861/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:26 np0005603663 python3.9[198017]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:26 np0005603663 python3.9[198142]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847625.6731327-557-128161599688947/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:27 np0005603663 python3.9[198294]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 03:20:27 np0005603663 python3.9[198447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:28 np0005603663 python3.9[198599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:29 np0005603663 python3.9[198751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:29 np0005603663 python3.9[198903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:30 np0005603663 python3.9[199055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:30 np0005603663 python3.9[199207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:31 np0005603663 python3.9[199359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:20:31
Jan 31 03:20:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:20:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:20:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 31 03:20:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:20:31 np0005603663 python3.9[199511]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:32 np0005603663 python3.9[199663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:32 np0005603663 python3.9[199815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:20:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:20:33 np0005603663 python3.9[199967]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:33 np0005603663 python3.9[200119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:34 np0005603663 python3.9[200271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:35 np0005603663 python3.9[200423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:35 np0005603663 python3.9[200575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:36 np0005603663 python3.9[200698]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847635.198031-778-225041944612494/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:36 np0005603663 python3.9[200850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:37 np0005603663 python3.9[200973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847636.2677617-778-259874175054680/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:37 np0005603663 python3.9[201125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:38 np0005603663 python3.9[201248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847637.307086-778-187944831826310/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:38 np0005603663 python3.9[201400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:39 np0005603663 python3.9[201523]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847638.3523028-778-118226693732491/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:39 np0005603663 python3.9[201675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:40 np0005603663 python3.9[201798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847639.3471482-778-274815403422036/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:40 np0005603663 python3.9[201950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:41 np0005603663 python3.9[202073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847640.414436-778-153144207950861/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:41 np0005603663 python3.9[202225]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:42 np0005603663 python3.9[202348]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847641.551892-778-29050396467474/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:43 np0005603663 python3.9[202500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:20:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:20:43 np0005603663 python3.9[202623]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847642.6241682-778-168187885785000/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:44 np0005603663 python3.9[202775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:44 np0005603663 python3.9[202898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847643.7442172-778-209186874680410/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:45 np0005603663 python3.9[203050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:45 np0005603663 podman[203145]: 2026-01-31 08:20:45.619232131 +0000 UTC m=+0.082832173 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:20:45 np0005603663 python3.9[203192]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847644.8442516-778-226616864293055/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:46 np0005603663 python3.9[203351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:46 np0005603663 python3.9[203474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847645.9082956-778-198690324807758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:47 np0005603663 python3.9[203626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:47 np0005603663 python3.9[203749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847646.9280689-778-147339654939247/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:48 np0005603663 python3.9[203901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:48 np0005603663 podman[203996]: 2026-01-31 08:20:48.883129627 +0000 UTC m=+0.065393574 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:20:49 np0005603663 python3.9[204043]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847648.1116261-778-43828169430092/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:49 np0005603663 python3.9[204195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:20:49 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 03:20:49 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:49.812456) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:20:49 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 03:20:49 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847649812484, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3584539, "memory_usage": 3648056, "flush_reason": "Manual Compaction"}
Jan 31 03:20:49 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650032696, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3508187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9788, "largest_seqno": 11832, "table_properties": {"data_size": 3498879, "index_size": 5930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17833, "raw_average_key_size": 19, "raw_value_size": 3480458, "raw_average_value_size": 3795, "num_data_blocks": 269, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847413, "oldest_key_time": 1769847413, "file_creation_time": 1769847649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 220343 microseconds, and 4530 cpu microseconds.
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.032792) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3508187 bytes OK
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.032817) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.053924) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054005) EVENT_LOG_v1 {"time_micros": 1769847650053995, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3575997, prev total WAL file size 3575997, number of live WAL files 2.
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054976) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3425KB)], [26(6457KB)]
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650055020, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10120320, "oldest_snapshot_seqno": -1}
Jan 31 03:20:50 np0005603663 python3.9[204318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847649.2118087-778-18053069276744/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3753 keys, 8432731 bytes, temperature: kUnknown
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650449141, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8432731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403364, "index_size": 18889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90282, "raw_average_key_size": 24, "raw_value_size": 8331319, "raw_average_value_size": 2219, "num_data_blocks": 817, "num_entries": 3753, "num_filter_entries": 3753, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847650, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.449362) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8432731 bytes
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.508302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.7 rd, 21.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4267, records dropped: 514 output_compression: NoCompression
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.508357) EVENT_LOG_v1 {"time_micros": 1769847650508332, "job": 10, "event": "compaction_finished", "compaction_time_micros": 394170, "compaction_time_cpu_micros": 24468, "output_level": 6, "num_output_files": 1, "total_output_size": 8432731, "num_input_records": 4267, "num_output_records": 3753, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650508812, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650509480, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:50 np0005603663 python3.9[204468]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:20:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:51 np0005603663 python3.9[204623]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 03:20:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:54 np0005603663 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 03:20:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:20:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:20:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:20:55 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:20:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:20:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:20:56 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:20:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:20:57 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:20:57 np0005603663 podman[204772]: 2026-01-31 08:20:57.678683105 +0000 UTC m=+0.023956127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:20:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:20:59 np0005603663 podman[204772]: 2026-01-31 08:20:59.11268652 +0000 UTC m=+1.457959492 container create 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:20:59 np0005603663 systemd[1]: Started libpod-conmon-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope.
Jan 31 03:20:59 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:00 np0005603663 auditd[706]: Audit daemon rotating log files
Jan 31 03:21:00 np0005603663 python3.9[204942]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:00 np0005603663 podman[204772]: 2026-01-31 08:21:00.674841233 +0000 UTC m=+3.020114255 container init 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:00 np0005603663 podman[204772]: 2026-01-31 08:21:00.680007791 +0000 UTC m=+3.025280763 container start 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:21:00 np0005603663 infallible_meninsky[204788]: 167 167
Jan 31 03:21:00 np0005603663 systemd[1]: libpod-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope: Deactivated successfully.
Jan 31 03:21:00 np0005603663 conmon[204788]: conmon 32945d0d83b1a3e38149 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope/container/memory.events
Jan 31 03:21:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:21:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:21:00 np0005603663 podman[204772]: 2026-01-31 08:21:00.823723567 +0000 UTC m=+3.168996559 container attach 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:21:00 np0005603663 podman[204772]: 2026-01-31 08:21:00.824554751 +0000 UTC m=+3.169827713 container died 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:21:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8933966e7d79c518e07f119ecffa3d1452a29ead0eb174485b0d2011ecec3ea2-merged.mount: Deactivated successfully.
Jan 31 03:21:01 np0005603663 python3.9[205108]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:01 np0005603663 podman[204772]: 2026-01-31 08:21:01.566922099 +0000 UTC m=+3.912195091 container remove 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:21:01 np0005603663 systemd[1]: libpod-conmon-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope: Deactivated successfully.
Jan 31 03:21:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:01 np0005603663 podman[205268]: 2026-01-31 08:21:01.673394318 +0000 UTC m=+0.021986720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:21:01 np0005603663 python3.9[205260]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:01 np0005603663 podman[205268]: 2026-01-31 08:21:01.815272741 +0000 UTC m=+0.163865103 container create c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:21:02 np0005603663 systemd[1]: Started libpod-conmon-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope.
Jan 31 03:21:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:02 np0005603663 podman[205268]: 2026-01-31 08:21:02.278442365 +0000 UTC m=+0.627034747 container init c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:02 np0005603663 podman[205268]: 2026-01-31 08:21:02.286301309 +0000 UTC m=+0.634893701 container start c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:21:02 np0005603663 python3.9[205438]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:02 np0005603663 podman[205268]: 2026-01-31 08:21:02.440352931 +0000 UTC m=+0.788945313 container attach c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:21:02 np0005603663 kind_lovelace[205407]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:21:02 np0005603663 kind_lovelace[205407]: --> All data devices are unavailable
Jan 31 03:21:02 np0005603663 systemd[1]: libpod-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope: Deactivated successfully.
Jan 31 03:21:02 np0005603663 podman[205268]: 2026-01-31 08:21:02.776120776 +0000 UTC m=+1.124713138 container died c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:03 np0005603663 python3.9[205616]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:03 np0005603663 systemd[1]: var-lib-containers-storage-overlay-903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e-merged.mount: Deactivated successfully.
Jan 31 03:21:03 np0005603663 python3.9[205769]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:04 np0005603663 podman[205268]: 2026-01-31 08:21:04.553017859 +0000 UTC m=+2.901610251 container remove c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:21:04 np0005603663 systemd[1]: libpod-conmon-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope: Deactivated successfully.
Jan 31 03:21:04 np0005603663 python3.9[205921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:04.949878193 +0000 UTC m=+0.021766944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:05.174824765 +0000 UTC m=+0.246713536 container create 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:21:05 np0005603663 python3.9[206149]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:05 np0005603663 systemd[1]: Started libpod-conmon-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope.
Jan 31 03:21:05 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:05.669420638 +0000 UTC m=+0.741309399 container init 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:05.679220519 +0000 UTC m=+0.751109290 container start 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:05 np0005603663 funny_dubinsky[206152]: 167 167
Jan 31 03:21:05 np0005603663 systemd[1]: libpod-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope: Deactivated successfully.
Jan 31 03:21:05 np0005603663 conmon[206152]: conmon 3f2e42e15da9d85641c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope/container/memory.events
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:05.81163173 +0000 UTC m=+0.883520521 container attach 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:21:05 np0005603663 podman[206008]: 2026-01-31 08:21:05.812134145 +0000 UTC m=+0.884022916 container died 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:21:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5747cdaa4d27e9bd9e15a8409b313a0eb84e2b0120676318fb0f5fef53ef8ed1-merged.mount: Deactivated successfully.
Jan 31 03:21:06 np0005603663 python3.9[206320]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:06 np0005603663 python3.9[206473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:07 np0005603663 podman[206008]: 2026-01-31 08:21:07.235805753 +0000 UTC m=+2.307694484 container remove 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:21:07 np0005603663 systemd[1]: libpod-conmon-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope: Deactivated successfully.
Jan 31 03:21:07 np0005603663 python3.9[206625]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:21:07 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:07 np0005603663 podman[206633]: 2026-01-31 08:21:07.377730327 +0000 UTC m=+0.025667766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:21:07 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:07 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:07 np0005603663 podman[206633]: 2026-01-31 08:21:07.560757219 +0000 UTC m=+0.208694628 container create fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:21:07 np0005603663 systemd[1]: Started libpod-conmon-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope.
Jan 31 03:21:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:07 np0005603663 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 03:21:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:07 np0005603663 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 03:21:07 np0005603663 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 03:21:07 np0005603663 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 03:21:07 np0005603663 systemd[1]: Starting libvirt logging daemon...
Jan 31 03:21:07 np0005603663 systemd[1]: Started libvirt logging daemon.
Jan 31 03:21:08 np0005603663 podman[206633]: 2026-01-31 08:21:08.030781327 +0000 UTC m=+0.678718696 container init fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:21:08 np0005603663 podman[206633]: 2026-01-31 08:21:08.052128779 +0000 UTC m=+0.700066148 container start fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]: {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    "0": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "devices": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "/dev/loop3"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            ],
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_name": "ceph_lv0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_size": "21470642176",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "name": "ceph_lv0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "tags": {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_name": "ceph",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.crush_device_class": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.encrypted": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.objectstore": "bluestore",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_id": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.vdo": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.with_tpm": "0"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            },
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "vg_name": "ceph_vg0"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        }
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    ],
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    "1": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "devices": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "/dev/loop4"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            ],
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_name": "ceph_lv1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_size": "21470642176",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "name": "ceph_lv1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "tags": {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_name": "ceph",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.crush_device_class": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.encrypted": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.objectstore": "bluestore",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_id": "1",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.vdo": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.with_tpm": "0"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            },
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "vg_name": "ceph_vg1"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        }
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    ],
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    "2": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "devices": [
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "/dev/loop5"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            ],
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_name": "ceph_lv2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_size": "21470642176",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "name": "ceph_lv2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "tags": {
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.cluster_name": "ceph",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.crush_device_class": "",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.encrypted": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.objectstore": "bluestore",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osd_id": "2",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.vdo": "0",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:                "ceph.with_tpm": "0"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            },
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "type": "block",
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:            "vg_name": "ceph_vg2"
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:        }
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]:    ]
Jan 31 03:21:08 np0005603663 mystifying_chaum[206686]: }
Jan 31 03:21:08 np0005603663 podman[206633]: 2026-01-31 08:21:08.33117509 +0000 UTC m=+0.979112539 container attach fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:21:08 np0005603663 systemd[1]: libpod-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope: Deactivated successfully.
Jan 31 03:21:08 np0005603663 podman[206852]: 2026-01-31 08:21:08.381615784 +0000 UTC m=+0.037758202 container died fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:08 np0005603663 python3.9[206851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:21:08 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:08 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:08 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:09 np0005603663 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 03:21:09 np0005603663 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 03:21:09 np0005603663 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 03:21:09 np0005603663 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 03:21:09 np0005603663 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 03:21:09 np0005603663 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 03:21:09 np0005603663 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 03:21:09 np0005603663 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 03:21:09 np0005603663 systemd[1]: Started libvirt nodedev daemon.
Jan 31 03:21:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb-merged.mount: Deactivated successfully.
Jan 31 03:21:09 np0005603663 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 03:21:09 np0005603663 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 03:21:09 np0005603663 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 03:21:09 np0005603663 python3.9[207081]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:21:09 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:10 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:10 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:10 np0005603663 podman[206852]: 2026-01-31 08:21:10.129449315 +0000 UTC m=+1.785591713 container remove fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:21:10 np0005603663 systemd[1]: libpod-conmon-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope: Deactivated successfully.
Jan 31 03:21:10 np0005603663 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 03:21:10 np0005603663 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 03:21:10 np0005603663 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 03:21:10 np0005603663 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 03:21:10 np0005603663 systemd[1]: Starting libvirt proxy daemon...
Jan 31 03:21:10 np0005603663 systemd[1]: Started libvirt proxy daemon.
Jan 31 03:21:10 np0005603663 podman[207279]: 2026-01-31 08:21:10.56157366 +0000 UTC m=+0.022491075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:21:10 np0005603663 podman[207279]: 2026-01-31 08:21:10.673441043 +0000 UTC m=+0.134358428 container create 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:21:10 np0005603663 systemd[1]: Started libpod-conmon-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope.
Jan 31 03:21:10 np0005603663 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 20804ff9-c985-44af-91ff-dcb7ac8409b8
Jan 31 03:21:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:10 np0005603663 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 31 03:21:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:10 np0005603663 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 20804ff9-c985-44af-91ff-dcb7ac8409b8
Jan 31 03:21:10 np0005603663 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 31 03:21:11 np0005603663 python3.9[207376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:21:11 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:11 np0005603663 podman[207279]: 2026-01-31 08:21:11.091952828 +0000 UTC m=+0.552870223 container init 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:11 np0005603663 podman[207279]: 2026-01-31 08:21:11.097688282 +0000 UTC m=+0.558605677 container start 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:21:11 np0005603663 infallible_shockley[207380]: 167 167
Jan 31 03:21:11 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:11 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:11 np0005603663 podman[207279]: 2026-01-31 08:21:11.279462587 +0000 UTC m=+0.740379992 container attach 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:21:11 np0005603663 podman[207279]: 2026-01-31 08:21:11.280745444 +0000 UTC m=+0.741662829 container died 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:21:11 np0005603663 systemd[1]: libpod-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope: Deactivated successfully.
Jan 31 03:21:11 np0005603663 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 03:21:11 np0005603663 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 03:21:11 np0005603663 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 03:21:11 np0005603663 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 03:21:11 np0005603663 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 03:21:11 np0005603663 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 03:21:11 np0005603663 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 03:21:11 np0005603663 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 03:21:11 np0005603663 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 03:21:11 np0005603663 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 03:21:11 np0005603663 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 03:21:11 np0005603663 systemd[1]: Started libvirt QEMU daemon.
Jan 31 03:21:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-228773be1f05df366aa23daeb83d3d9e5a8e8e293e59ba1cb0d8170de0c1961d-merged.mount: Deactivated successfully.
Jan 31 03:21:12 np0005603663 python3.9[207611]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:21:12 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:12 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:12 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:12 np0005603663 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 03:21:12 np0005603663 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 03:21:12 np0005603663 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 03:21:12 np0005603663 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 03:21:12 np0005603663 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 03:21:12 np0005603663 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 03:21:12 np0005603663 systemd[1]: Starting libvirt secret daemon...
Jan 31 03:21:12 np0005603663 systemd[1]: Started libvirt secret daemon.
Jan 31 03:21:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:13 np0005603663 podman[207279]: 2026-01-31 08:21:13.015434108 +0000 UTC m=+2.476351493 container remove 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:13 np0005603663 systemd[1]: libpod-conmon-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope: Deactivated successfully.
Jan 31 03:21:13 np0005603663 podman[207830]: 2026-01-31 08:21:13.130423021 +0000 UTC m=+0.027398106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:21:13 np0005603663 python3.9[207824]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:13 np0005603663 podman[207830]: 2026-01-31 08:21:13.509343731 +0000 UTC m=+0.406318786 container create 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:21:13 np0005603663 systemd[1]: Started libpod-conmon-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope.
Jan 31 03:21:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:21:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:14 np0005603663 python3.9[207995]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:21:14 np0005603663 podman[207830]: 2026-01-31 08:21:14.108430897 +0000 UTC m=+1.005405992 container init 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:14 np0005603663 podman[207830]: 2026-01-31 08:21:14.114846361 +0000 UTC m=+1.011821406 container start 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:14 np0005603663 podman[207830]: 2026-01-31 08:21:14.119434312 +0000 UTC m=+1.016409417 container attach 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:21:14 np0005603663 python3.9[208164]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:14 np0005603663 lvm[208256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:21:14 np0005603663 lvm[208257]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:21:14 np0005603663 lvm[208257]: VG ceph_vg1 finished
Jan 31 03:21:14 np0005603663 lvm[208256]: VG ceph_vg0 finished
Jan 31 03:21:14 np0005603663 lvm[208259]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:21:14 np0005603663 lvm[208259]: VG ceph_vg2 finished
Jan 31 03:21:14 np0005603663 nervous_nobel[207998]: {}
Jan 31 03:21:14 np0005603663 systemd[1]: libpod-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Deactivated successfully.
Jan 31 03:21:14 np0005603663 systemd[1]: libpod-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Consumed 1.048s CPU time.
Jan 31 03:21:14 np0005603663 podman[207830]: 2026-01-31 08:21:14.844612808 +0000 UTC m=+1.741587863 container died 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00-merged.mount: Deactivated successfully.
Jan 31 03:21:14 np0005603663 podman[207830]: 2026-01-31 08:21:14.99450445 +0000 UTC m=+1.891479495 container remove 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:21:15 np0005603663 systemd[1]: libpod-conmon-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Deactivated successfully.
Jan 31 03:21:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:21:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:21:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:21:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:21:15 np0005603663 python3.9[208399]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:21:15 np0005603663 podman[208548]: 2026-01-31 08:21:15.750288162 +0000 UTC m=+0.076758679 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:15 np0005603663 python3.9[208585]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:21:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:21:16 np0005603663 python3.9[208721]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847675.4712029-1136-14053700092707/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9c2345731d8b82f59a4e13abe20e8b39f999829b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:16 np0005603663 python3.9[208873]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 82c880e6-d992-5408-8b12-efff9c275473#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:17 np0005603663 python3.9[209035]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.880 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:19 np0005603663 podman[209417]: 2026-01-31 08:21:19.183401023 +0000 UTC m=+0.067222186 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 03:21:19 np0005603663 python3.9[209517]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:20 np0005603663 python3.9[209669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:20 np0005603663 python3.9[209792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847679.709601-1191-26481978990601/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:20 np0005603663 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 03:21:20 np0005603663 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 03:21:21 np0005603663 python3.9[209944]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:22 np0005603663 python3.9[210096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:22 np0005603663 python3.9[210174]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:23 np0005603663 python3.9[210326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:23 np0005603663 python3.9[210404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3hp1eybe recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:24 np0005603663 python3.9[210556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:24 np0005603663 python3.9[210634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:25 np0005603663 python3.9[210786]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:26 np0005603663 python3[210939]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 03:21:26 np0005603663 python3.9[211091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:27 np0005603663 python3.9[211169]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:27 np0005603663 python3.9[211321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:28 np0005603663 python3.9[211446]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847687.2365746-1280-166364935547996/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:28 np0005603663 python3.9[211598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:29 np0005603663 python3.9[211676]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:30 np0005603663 python3.9[211828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:30 np0005603663 python3.9[211906]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:31 np0005603663 python3.9[212058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:31 np0005603663 python3.9[212183]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847690.5722914-1319-188979069630669/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:21:31
Jan 31 03:21:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:21:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:21:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms']
Jan 31 03:21:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:21:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:32 np0005603663 python3.9[212335]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:32 np0005603663 python3.9[212487]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:21:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:21:33 np0005603663 python3.9[212642]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:34 np0005603663 python3.9[212794]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:35 np0005603663 python3.9[212947]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:21:35 np0005603663 python3.9[213101]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:21:36 np0005603663 python3.9[213256]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:36 np0005603663 python3.9[213408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:37 np0005603663 python3.9[213531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847696.394935-1391-59840320344174/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:37 np0005603663 python3.9[213683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:38 np0005603663 python3.9[213806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847697.455447-1406-235554175796528/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:39 np0005603663 python3.9[213958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:21:39 np0005603663 python3.9[214081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847698.6476796-1421-220953191478066/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:21:40 np0005603663 python3.9[214233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:21:40 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:40 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:40 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:40 np0005603663 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 03:21:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:41 np0005603663 python3.9[214424]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 03:21:41 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:41 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:41 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:41 np0005603663 systemd[1]: Reloading.
Jan 31 03:21:41 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:21:41 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:21:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:42 np0005603663 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 03:21:42 np0005603663 systemd[1]: session-49.scope: Consumed 3min 1.777s CPU time.
Jan 31 03:21:42 np0005603663 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Jan 31 03:21:42 np0005603663 systemd-logind[793]: Removed session 49.
Jan 31 03:21:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:21:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:21:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:46 np0005603663 podman[214521]: 2026-01-31 08:21:46.224734341 +0000 UTC m=+0.113965099 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:21:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:47 np0005603663 systemd-logind[793]: New session 50 of user zuul.
Jan 31 03:21:47 np0005603663 systemd[1]: Started Session 50 of User zuul.
Jan 31 03:21:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:48 np0005603663 python3.9[214700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:21:49 np0005603663 podman[214828]: 2026-01-31 08:21:49.939063687 +0000 UTC m=+0.063467341 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:21:50 np0005603663 python3.9[214865]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:21:50 np0005603663 network[214889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:21:50 np0005603663 network[214890]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:21:50 np0005603663 network[214891]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:21:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:53 np0005603663 python3.9[215163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 03:21:54 np0005603663 python3.9[215247]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:21:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:21:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:21:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:00 np0005603663 python3.9[215400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:22:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:01 np0005603663 python3.9[215552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:22:02 np0005603663 python3.9[215705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:22:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:02 np0005603663 python3.9[215857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:22:03 np0005603663 python3.9[216010]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:22:04 np0005603663 python3.9[216133]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847723.1160784-90-210799825723806/.source.iscsi _original_basename=.d_h60ozi follow=False checksum=dbd5f8f0b80c1618236eabb12e8c4b08e4109ee7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:05 np0005603663 python3.9[216285]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:05 np0005603663 python3.9[216437]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:07 np0005603663 python3.9[216589]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:07 np0005603663 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 03:22:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:07 np0005603663 python3.9[216745]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:08 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:08 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:08 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:08 np0005603663 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 03:22:08 np0005603663 systemd[1]: Starting Open-iSCSI...
Jan 31 03:22:08 np0005603663 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 03:22:08 np0005603663 systemd[1]: Started Open-iSCSI.
Jan 31 03:22:08 np0005603663 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 03:22:08 np0005603663 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 03:22:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:09 np0005603663 python3.9[216944]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:22:09 np0005603663 network[216961]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:22:09 np0005603663 network[216962]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:22:09 np0005603663 network[216963]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:22:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:12 np0005603663 python3.9[217236]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:22:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:22:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:22:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:22:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.234234435 +0000 UTC m=+0.047332489 container create 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:22:16 np0005603663 systemd[1]: Started libpod-conmon-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope.
Jan 31 03:22:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.301802621 +0000 UTC m=+0.114900745 container init 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.213453642 +0000 UTC m=+0.026551746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.310804063 +0000 UTC m=+0.123902137 container start 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:22:16 np0005603663 silly_mendel[217416]: 167 167
Jan 31 03:22:16 np0005603663 systemd[1]: libpod-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope: Deactivated successfully.
Jan 31 03:22:16 np0005603663 conmon[217416]: conmon 18a35777ae391412d57c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope/container/memory.events
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.318802998 +0000 UTC m=+0.131901072 container attach 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.319632901 +0000 UTC m=+0.132730945 container died 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:22:16 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:22:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a067d2dca60cf88bd71b5ea51f7bb9ce05265b32c47332e29d811b524be83807-merged.mount: Deactivated successfully.
Jan 31 03:22:16 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:22:16 np0005603663 podman[217406]: 2026-01-31 08:22:16.359004456 +0000 UTC m=+0.100278405 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:22:16 np0005603663 podman[217392]: 2026-01-31 08:22:16.365103067 +0000 UTC m=+0.178201111 container remove 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:22:16 np0005603663 systemd[1]: libpod-conmon-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope: Deactivated successfully.
Jan 31 03:22:16 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:16 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:16 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:16 np0005603663 podman[217487]: 2026-01-31 08:22:16.517565464 +0000 UTC m=+0.042523064 container create 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:22:16 np0005603663 podman[217487]: 2026-01-31 08:22:16.500110345 +0000 UTC m=+0.025067965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:16 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:22:16 np0005603663 systemd[1]: Started libpod-conmon-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope.
Jan 31 03:22:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:17 np0005603663 podman[217487]: 2026-01-31 08:22:17.005485254 +0000 UTC m=+0.530442904 container init 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:17 np0005603663 podman[217487]: 2026-01-31 08:22:17.01780579 +0000 UTC m=+0.542763420 container start 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:22:17 np0005603663 podman[217487]: 2026-01-31 08:22:17.22630738 +0000 UTC m=+0.751264990 container attach 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:22:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:17 np0005603663 amazing_mestorf[217628]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:22:17 np0005603663 amazing_mestorf[217628]: --> All data devices are unavailable
Jan 31 03:22:17 np0005603663 systemd[1]: libpod-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope: Deactivated successfully.
Jan 31 03:22:17 np0005603663 podman[217487]: 2026-01-31 08:22:17.418775671 +0000 UTC m=+0.943733291 container died 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:22:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2-merged.mount: Deactivated successfully.
Jan 31 03:22:17 np0005603663 podman[217487]: 2026-01-31 08:22:17.482334984 +0000 UTC m=+1.007292624 container remove 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:22:17 np0005603663 systemd[1]: libpod-conmon-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope: Deactivated successfully.
Jan 31 03:22:17 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:22:17 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:22:17 np0005603663 systemd[1]: run-r3855a2c52c774a828c0ff318b8195f6d.service: Deactivated successfully.
Jan 31 03:22:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.887531043 +0000 UTC m=+0.047228636 container create 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:22:17 np0005603663 systemd[1]: Started libpod-conmon-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope.
Jan 31 03:22:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.955966203 +0000 UTC m=+0.115663786 container init 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.860149665 +0000 UTC m=+0.019847238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.963857225 +0000 UTC m=+0.123554778 container start 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.967432315 +0000 UTC m=+0.127129978 container attach 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:22:17 np0005603663 nostalgic_benz[217858]: 167 167
Jan 31 03:22:17 np0005603663 systemd[1]: libpod-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope: Deactivated successfully.
Jan 31 03:22:17 np0005603663 conmon[217858]: conmon 188f2cccf7e65f52b8e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope/container/memory.events
Jan 31 03:22:17 np0005603663 podman[217801]: 2026-01-31 08:22:17.971643893 +0000 UTC m=+0.131341496 container died 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:22:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5d829cbb91b6d9ba4d8863bf4a8c4d2e82a91bf79dcead2868ca7569490216b8-merged.mount: Deactivated successfully.
Jan 31 03:22:18 np0005603663 podman[217801]: 2026-01-31 08:22:18.008748254 +0000 UTC m=+0.168445807 container remove 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:22:18 np0005603663 systemd[1]: libpod-conmon-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope: Deactivated successfully.
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.126422036 +0000 UTC m=+0.045298302 container create 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:22:18 np0005603663 systemd[1]: Started libpod-conmon-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope.
Jan 31 03:22:18 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.103458531 +0000 UTC m=+0.022334857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:18 np0005603663 python3.9[217903]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.219689703 +0000 UTC m=+0.138565959 container init 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.223945922 +0000 UTC m=+0.142822158 container start 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.228649894 +0000 UTC m=+0.147526160 container attach 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]: {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    "0": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "devices": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "/dev/loop3"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            ],
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_name": "ceph_lv0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_size": "21470642176",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "name": "ceph_lv0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "tags": {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_name": "ceph",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.crush_device_class": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.encrypted": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.objectstore": "bluestore",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_id": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.vdo": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.with_tpm": "0"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            },
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "vg_name": "ceph_vg0"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        }
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    ],
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    "1": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "devices": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "/dev/loop4"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            ],
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_name": "ceph_lv1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_size": "21470642176",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "name": "ceph_lv1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "tags": {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_name": "ceph",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.crush_device_class": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.encrypted": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.objectstore": "bluestore",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_id": "1",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.vdo": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.with_tpm": "0"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            },
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "vg_name": "ceph_vg1"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        }
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    ],
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    "2": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "devices": [
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "/dev/loop5"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            ],
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_name": "ceph_lv2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_size": "21470642176",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "name": "ceph_lv2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "tags": {
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.cluster_name": "ceph",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.crush_device_class": "",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.encrypted": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.objectstore": "bluestore",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osd_id": "2",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.vdo": "0",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:                "ceph.with_tpm": "0"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            },
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "type": "block",
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:            "vg_name": "ceph_vg2"
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:        }
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]:    ]
Jan 31 03:22:18 np0005603663 beautiful_khorana[217927]: }
Jan 31 03:22:18 np0005603663 systemd[1]: libpod-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope: Deactivated successfully.
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.496590752 +0000 UTC m=+0.415466998 container died 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:22:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d-merged.mount: Deactivated successfully.
Jan 31 03:22:18 np0005603663 podman[217911]: 2026-01-31 08:22:18.532242862 +0000 UTC m=+0.451119098 container remove 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:18 np0005603663 systemd[1]: libpod-conmon-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope: Deactivated successfully.
Jan 31 03:22:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:18 np0005603663 podman[218162]: 2026-01-31 08:22:18.958113771 +0000 UTC m=+0.044546580 container create 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:22:18 np0005603663 python3.9[218148]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 03:22:18 np0005603663 systemd[1]: Started libpod-conmon-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope.
Jan 31 03:22:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:19.035680658 +0000 UTC m=+0.122113557 container init 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:18.941070573 +0000 UTC m=+0.027503402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:19.044148915 +0000 UTC m=+0.130581734 container start 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:19.048488747 +0000 UTC m=+0.134921566 container attach 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:19 np0005603663 nifty_antonelli[218182]: 167 167
Jan 31 03:22:19 np0005603663 systemd[1]: libpod-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope: Deactivated successfully.
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:19.050865734 +0000 UTC m=+0.137298543 container died 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:19 np0005603663 systemd[1]: var-lib-containers-storage-overlay-86a04589ed30f2de77cfc02145014ddeed98f05d3e9a7fc44a3a5ea67c7e96e0-merged.mount: Deactivated successfully.
Jan 31 03:22:19 np0005603663 podman[218162]: 2026-01-31 08:22:19.10242214 +0000 UTC m=+0.188854959 container remove 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:19 np0005603663 systemd[1]: libpod-conmon-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope: Deactivated successfully.
Jan 31 03:22:19 np0005603663 podman[218256]: 2026-01-31 08:22:19.250285119 +0000 UTC m=+0.047902705 container create d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:19 np0005603663 systemd[1]: Started libpod-conmon-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope.
Jan 31 03:22:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:22:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:19 np0005603663 podman[218256]: 2026-01-31 08:22:19.23390907 +0000 UTC m=+0.031526676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:22:19 np0005603663 podman[218256]: 2026-01-31 08:22:19.344149433 +0000 UTC m=+0.141767039 container init d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:19 np0005603663 podman[218256]: 2026-01-31 08:22:19.348698651 +0000 UTC m=+0.146316237 container start d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:22:19 np0005603663 podman[218256]: 2026-01-31 08:22:19.351810928 +0000 UTC m=+0.149428524 container attach d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:19 np0005603663 python3.9[218379]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:22:19 np0005603663 lvm[218577]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:22:19 np0005603663 lvm[218576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:22:19 np0005603663 lvm[218577]: VG ceph_vg1 finished
Jan 31 03:22:19 np0005603663 lvm[218576]: VG ceph_vg0 finished
Jan 31 03:22:19 np0005603663 lvm[218579]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:22:19 np0005603663 lvm[218579]: VG ceph_vg2 finished
Jan 31 03:22:20 np0005603663 python3.9[218568]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847739.176283-178-78956825028929/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:20 np0005603663 objective_ride[218322]: {}
Jan 31 03:22:20 np0005603663 podman[218580]: 2026-01-31 08:22:20.053353221 +0000 UTC m=+0.057944937 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:22:20 np0005603663 systemd[1]: libpod-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Deactivated successfully.
Jan 31 03:22:20 np0005603663 systemd[1]: libpod-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Consumed 1.020s CPU time.
Jan 31 03:22:20 np0005603663 podman[218256]: 2026-01-31 08:22:20.07507762 +0000 UTC m=+0.872695206 container died d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:20 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4-merged.mount: Deactivated successfully.
Jan 31 03:22:20 np0005603663 podman[218256]: 2026-01-31 08:22:20.245950045 +0000 UTC m=+1.043567631 container remove d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:22:20 np0005603663 systemd[1]: libpod-conmon-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Deactivated successfully.
Jan 31 03:22:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:22:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:22:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:20 np0005603663 python3.9[218793]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:22:21 np0005603663 python3.9[218945]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:22:21 np0005603663 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 03:22:21 np0005603663 systemd[1]: Stopped Load Kernel Modules.
Jan 31 03:22:21 np0005603663 systemd[1]: Stopping Load Kernel Modules...
Jan 31 03:22:21 np0005603663 systemd[1]: Starting Load Kernel Modules...
Jan 31 03:22:21 np0005603663 systemd[1]: Finished Load Kernel Modules.
Jan 31 03:22:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:22 np0005603663 python3.9[219101]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:22:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:22 np0005603663 python3.9[219254]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:22:23 np0005603663 python3.9[219406]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:22:24 np0005603663 python3.9[219529]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847743.1581905-229-165288016788793/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:24 np0005603663 python3.9[219681]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:22:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:25 np0005603663 python3.9[219834]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:26 np0005603663 python3.9[219986]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:26 np0005603663 python3.9[220138]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:27 np0005603663 python3.9[220290]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:27 np0005603663 python3.9[220442]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:28 np0005603663 python3.9[220594]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:28 np0005603663 python3.9[220746]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:29 np0005603663 python3.9[220898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:22:30 np0005603663 python3.9[221052]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:22:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:30 np0005603663 python3.9[221205]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:31 np0005603663 systemd[1]: Listening on multipathd control socket.
Jan 31 03:22:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:22:31
Jan 31 03:22:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:22:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:22:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images']
Jan 31 03:22:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:22:31 np0005603663 python3.9[221361]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:31 np0005603663 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 03:22:31 np0005603663 udevadm[221366]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 03:22:31 np0005603663 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 03:22:31 np0005603663 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 03:22:31 np0005603663 multipathd[221369]: --------start up--------
Jan 31 03:22:31 np0005603663 multipathd[221369]: read /etc/multipath.conf
Jan 31 03:22:31 np0005603663 multipathd[221369]: path checkers start up
Jan 31 03:22:32 np0005603663 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 03:22:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:32 np0005603663 python3.9[221528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:22:32 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:22:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:22:33 np0005603663 python3.9[221680]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 03:22:33 np0005603663 kernel: Key type psk registered
Jan 31 03:22:34 np0005603663 python3.9[221843]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:22:34 np0005603663 python3.9[221966]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847753.7828674-359-176824095711282/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:35 np0005603663 python3.9[222118]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:36 np0005603663 python3.9[222270]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:22:36 np0005603663 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 03:22:36 np0005603663 systemd[1]: Stopped Load Kernel Modules.
Jan 31 03:22:36 np0005603663 systemd[1]: Stopping Load Kernel Modules...
Jan 31 03:22:36 np0005603663 systemd[1]: Starting Load Kernel Modules...
Jan 31 03:22:36 np0005603663 systemd[1]: Finished Load Kernel Modules.
Jan 31 03:22:36 np0005603663 python3.9[222426]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 03:22:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:38 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:38 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:38 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:39 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:39 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:39 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:39 np0005603663 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 03:22:39 np0005603663 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 03:22:39 np0005603663 lvm[222543]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:22:39 np0005603663 lvm[222543]: VG ceph_vg2 finished
Jan 31 03:22:39 np0005603663 lvm[222541]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:22:39 np0005603663 lvm[222541]: VG ceph_vg0 finished
Jan 31 03:22:39 np0005603663 lvm[222540]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:22:39 np0005603663 lvm[222540]: VG ceph_vg1 finished
Jan 31 03:22:39 np0005603663 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 03:22:39 np0005603663 systemd[1]: Starting man-db-cache-update.service...
Jan 31 03:22:39 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:39 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:39 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:39 np0005603663 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 03:22:40 np0005603663 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 03:22:40 np0005603663 systemd[1]: Finished man-db-cache-update.service.
Jan 31 03:22:40 np0005603663 systemd[1]: man-db-cache-update.service: Consumed 1.143s CPU time.
Jan 31 03:22:40 np0005603663 systemd[1]: run-rc6f0038b51bc4f9d87c5427b26522ed1.service: Deactivated successfully.
Jan 31 03:22:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:41 np0005603663 python3.9[223897]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:22:41 np0005603663 iscsid[216785]: iscsid shutting down.
Jan 31 03:22:41 np0005603663 systemd[1]: Stopping Open-iSCSI...
Jan 31 03:22:41 np0005603663 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 03:22:41 np0005603663 systemd[1]: Stopped Open-iSCSI.
Jan 31 03:22:41 np0005603663 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 03:22:41 np0005603663 systemd[1]: Starting Open-iSCSI...
Jan 31 03:22:41 np0005603663 systemd[1]: Started Open-iSCSI.
Jan 31 03:22:41 np0005603663 python3.9[224053]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:22:41 np0005603663 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 03:22:41 np0005603663 multipathd[221369]: exit (signal)
Jan 31 03:22:41 np0005603663 multipathd[221369]: --------shut down-------
Jan 31 03:22:41 np0005603663 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 03:22:41 np0005603663 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 03:22:41 np0005603663 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 03:22:42 np0005603663 multipathd[224060]: --------start up--------
Jan 31 03:22:42 np0005603663 multipathd[224060]: read /etc/multipath.conf
Jan 31 03:22:42 np0005603663 multipathd[224060]: path checkers start up
Jan 31 03:22:42 np0005603663 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 03:22:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:42 np0005603663 python3.9[224217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 03:22:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:22:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:22:43 np0005603663 python3.9[224373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:44 np0005603663 python3.9[224525]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:22:44 np0005603663 systemd[1]: Reloading.
Jan 31 03:22:44 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:22:44 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:22:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:45 np0005603663 python3.9[224710]: ansible-ansible.builtin.service_facts Invoked
Jan 31 03:22:45 np0005603663 network[224727]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 03:22:45 np0005603663 network[224728]: 'network-scripts' will be removed from distribution in near future.
Jan 31 03:22:45 np0005603663 network[224729]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 03:22:46 np0005603663 podman[224755]: 2026-01-31 08:22:46.542970642 +0000 UTC m=+0.126674646 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:22:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:49 np0005603663 python3.9[225028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:49 np0005603663 python3.9[225181]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:50 np0005603663 podman[225265]: 2026-01-31 08:22:50.170994084 +0000 UTC m=+0.062520066 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:22:50 np0005603663 python3.9[225353]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:51 np0005603663 python3.9[225506]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:51 np0005603663 python3.9[225659]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:52 np0005603663 python3.9[225812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:53 np0005603663 python3.9[225965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:54 np0005603663 python3.9[226118]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:22:54 np0005603663 python3.9[226271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:55 np0005603663 python3.9[226423]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:55 np0005603663 python3.9[226575]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:56 np0005603663 python3.9[226727]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:57 np0005603663 python3.9[226879]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:22:57 np0005603663 python3.9[227031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:58 np0005603663 python3.9[227183]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:22:58 np0005603663 python3.9[227335]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:22:59 np0005603663 python3.9[227487]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:00 np0005603663 python3.9[227639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:00 np0005603663 python3.9[227791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:01 np0005603663 python3.9[227943]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:02 np0005603663 python3.9[228095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:02 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:02 np0005603663 python3.9[228247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:03 np0005603663 python3.9[228399]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:03 np0005603663 python3.9[228551]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:04 np0005603663 python3.9[228703]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:05 np0005603663 python3.9[228855]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 03:23:06 np0005603663 python3.9[229007]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:23:06 np0005603663 systemd[1]: Reloading.
Jan 31 03:23:06 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:23:06 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:23:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:07 np0005603663 python3.9[229194]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:07 np0005603663 python3.9[229347]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:08 np0005603663 python3.9[229500]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:09 np0005603663 python3.9[229653]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:09 np0005603663 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 03:23:09 np0005603663 python3.9[229807]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:10 np0005603663 python3.9[229960]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:10 np0005603663 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 03:23:10 np0005603663 python3.9[230114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:11 np0005603663 python3.9[230267]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 03:23:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:12 np0005603663 python3.9[230420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:13 np0005603663 python3.9[230572]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:13 np0005603663 python3.9[230724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:14 np0005603663 python3.9[230876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:15 np0005603663 python3.9[231028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:15 np0005603663 python3.9[231180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:16 np0005603663 python3.9[231332]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:16 np0005603663 podman[231456]: 2026-01-31 08:23:16.849950479 +0000 UTC m=+0.099371859 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:23:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:17 np0005603663 python3.9[231499]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:17 np0005603663 python3.9[231662]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.883 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:18 np0005603663 python3.9[231814]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:18 np0005603663 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 03:23:18 np0005603663 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 03:23:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:20 np0005603663 podman[231865]: 2026-01-31 08:23:20.520729361 +0000 UTC m=+0.039376438 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:23:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.488010038 +0000 UTC m=+0.061480356 container create 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:23:21 np0005603663 systemd[1]: Started libpod-conmon-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope.
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.461489172 +0000 UTC m=+0.034959540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.573822204 +0000 UTC m=+0.147292572 container init 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.583013069 +0000 UTC m=+0.156483357 container start 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.586963653 +0000 UTC m=+0.160434041 container attach 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:23:21 np0005603663 quizzical_mcclintock[232019]: 167 167
Jan 31 03:23:21 np0005603663 systemd[1]: libpod-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope: Deactivated successfully.
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.589026173 +0000 UTC m=+0.162496451 container died 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:23:21 np0005603663 systemd[1]: var-lib-containers-storage-overlay-16acd85134a09d72651b6c8803bcffe62ad3e455b8552396ed74226143b7c0c0-merged.mount: Deactivated successfully.
Jan 31 03:23:21 np0005603663 podman[232003]: 2026-01-31 08:23:21.633881398 +0000 UTC m=+0.207351686 container remove 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:21 np0005603663 systemd[1]: libpod-conmon-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope: Deactivated successfully.
Jan 31 03:23:21 np0005603663 podman[232043]: 2026-01-31 08:23:21.763440497 +0000 UTC m=+0.049864020 container create 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:23:21 np0005603663 systemd[1]: Started libpod-conmon-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope.
Jan 31 03:23:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:21 np0005603663 podman[232043]: 2026-01-31 08:23:21.745748516 +0000 UTC m=+0.032172009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:21 np0005603663 podman[232043]: 2026-01-31 08:23:21.855888075 +0000 UTC m=+0.142311598 container init 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:23:21 np0005603663 podman[232043]: 2026-01-31 08:23:21.86332942 +0000 UTC m=+0.149752923 container start 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:23:21 np0005603663 podman[232043]: 2026-01-31 08:23:21.867919212 +0000 UTC m=+0.154342725 container attach 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:22 np0005603663 practical_brown[232060]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:23:22 np0005603663 practical_brown[232060]: --> All data devices are unavailable
Jan 31 03:23:22 np0005603663 systemd[1]: libpod-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope: Deactivated successfully.
Jan 31 03:23:22 np0005603663 podman[232043]: 2026-01-31 08:23:22.379149727 +0000 UTC m=+0.665573210 container died 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:23:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf-merged.mount: Deactivated successfully.
Jan 31 03:23:22 np0005603663 podman[232043]: 2026-01-31 08:23:22.422160968 +0000 UTC m=+0.708584441 container remove 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:22 np0005603663 systemd[1]: libpod-conmon-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope: Deactivated successfully.
Jan 31 03:23:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:22 np0005603663 podman[232152]: 2026-01-31 08:23:22.929528941 +0000 UTC m=+0.092338076 container create d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:23:22 np0005603663 podman[232152]: 2026-01-31 08:23:22.866390389 +0000 UTC m=+0.029199544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:22 np0005603663 systemd[1]: Started libpod-conmon-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope.
Jan 31 03:23:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:23 np0005603663 podman[232152]: 2026-01-31 08:23:23.000238002 +0000 UTC m=+0.163047157 container init d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:23:23 np0005603663 podman[232152]: 2026-01-31 08:23:23.00570883 +0000 UTC m=+0.168517975 container start d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:23:23 np0005603663 xenodochial_chatterjee[232220]: 167 167
Jan 31 03:23:23 np0005603663 systemd[1]: libpod-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope: Deactivated successfully.
Jan 31 03:23:23 np0005603663 podman[232152]: 2026-01-31 08:23:23.011155147 +0000 UTC m=+0.173964282 container attach d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:23:23 np0005603663 podman[232152]: 2026-01-31 08:23:23.011481497 +0000 UTC m=+0.174290632 container died d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-577854fa7570d133eabb64277ca42c6ea28791e585f83e9e34b6a080fde8e03a-merged.mount: Deactivated successfully.
Jan 31 03:23:23 np0005603663 podman[232152]: 2026-01-31 08:23:23.050619126 +0000 UTC m=+0.213428251 container remove d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:23:23 np0005603663 systemd[1]: libpod-conmon-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope: Deactivated successfully.
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.215642079 +0000 UTC m=+0.062623748 container create d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:23 np0005603663 systemd[1]: Started libpod-conmon-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope.
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.186697804 +0000 UTC m=+0.033679533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.33036106 +0000 UTC m=+0.177342779 container init d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.336229669 +0000 UTC m=+0.183211328 container start d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.340167633 +0000 UTC m=+0.187149302 container attach d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:23 np0005603663 python3.9[232337]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]: {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    "0": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "devices": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "/dev/loop3"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            ],
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_name": "ceph_lv0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_size": "21470642176",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "name": "ceph_lv0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "tags": {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_name": "ceph",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.crush_device_class": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.encrypted": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.objectstore": "bluestore",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_id": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.vdo": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.with_tpm": "0"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            },
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "vg_name": "ceph_vg0"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        }
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    ],
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    "1": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "devices": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "/dev/loop4"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            ],
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_name": "ceph_lv1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_size": "21470642176",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "name": "ceph_lv1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "tags": {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_name": "ceph",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.crush_device_class": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.encrypted": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.objectstore": "bluestore",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_id": "1",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.vdo": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.with_tpm": "0"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            },
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "vg_name": "ceph_vg1"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        }
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    ],
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    "2": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "devices": [
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "/dev/loop5"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            ],
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_name": "ceph_lv2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_size": "21470642176",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "name": "ceph_lv2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "tags": {
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.cluster_name": "ceph",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.crush_device_class": "",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.encrypted": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.objectstore": "bluestore",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osd_id": "2",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.vdo": "0",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:                "ceph.with_tpm": "0"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            },
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "type": "block",
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:            "vg_name": "ceph_vg2"
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:        }
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]:    ]
Jan 31 03:23:23 np0005603663 jolly_burnell[232332]: }
Jan 31 03:23:23 np0005603663 systemd[1]: libpod-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope: Deactivated successfully.
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.680690921 +0000 UTC m=+0.527672550 container died d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:23:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663-merged.mount: Deactivated successfully.
Jan 31 03:23:23 np0005603663 podman[232266]: 2026-01-31 08:23:23.74926989 +0000 UTC m=+0.596251529 container remove d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:23:23 np0005603663 systemd[1]: libpod-conmon-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope: Deactivated successfully.
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.195524698 +0000 UTC m=+0.051415825 container create ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:24 np0005603663 systemd[1]: Started libpod-conmon-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope.
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.17443121 +0000 UTC m=+0.030322447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.289240953 +0000 UTC m=+0.145132190 container init ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.29847615 +0000 UTC m=+0.154367297 container start ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.302617689 +0000 UTC m=+0.158508906 container attach ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 03:23:24 np0005603663 peaceful_carver[232586]: 167 167
Jan 31 03:23:24 np0005603663 systemd[1]: libpod-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope: Deactivated successfully.
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.304411431 +0000 UTC m=+0.160302578 container died ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:24 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6b06efc16cad4582e4429c39e1f81d8f00cadec2595719931b6f330ec2b5afef-merged.mount: Deactivated successfully.
Jan 31 03:23:24 np0005603663 podman[232548]: 2026-01-31 08:23:24.339322779 +0000 UTC m=+0.195213906 container remove ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:23:24 np0005603663 python3.9[232583]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 03:23:24 np0005603663 systemd[1]: libpod-conmon-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope: Deactivated successfully.
Jan 31 03:23:24 np0005603663 podman[232616]: 2026-01-31 08:23:24.461661739 +0000 UTC m=+0.034901008 container create 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:24 np0005603663 systemd[1]: Started libpod-conmon-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope.
Jan 31 03:23:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:24 np0005603663 podman[232616]: 2026-01-31 08:23:24.53374628 +0000 UTC m=+0.106985549 container init 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:23:24 np0005603663 podman[232616]: 2026-01-31 08:23:24.445511363 +0000 UTC m=+0.018750622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:23:24 np0005603663 podman[232616]: 2026-01-31 08:23:24.540384191 +0000 UTC m=+0.113623440 container start 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:23:24 np0005603663 podman[232616]: 2026-01-31 08:23:24.543721508 +0000 UTC m=+0.116960767 container attach 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:23:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:25 np0005603663 lvm[232863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:23:25 np0005603663 lvm[232864]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:23:25 np0005603663 lvm[232863]: VG ceph_vg0 finished
Jan 31 03:23:25 np0005603663 lvm[232864]: VG ceph_vg1 finished
Jan 31 03:23:25 np0005603663 lvm[232866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:23:25 np0005603663 lvm[232866]: VG ceph_vg2 finished
Jan 31 03:23:25 np0005603663 python3.9[232839]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 03:23:25 np0005603663 laughing_feynman[232657]: {}
Jan 31 03:23:25 np0005603663 systemd[1]: libpod-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Deactivated successfully.
Jan 31 03:23:25 np0005603663 systemd[1]: libpod-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Consumed 1.187s CPU time.
Jan 31 03:23:25 np0005603663 podman[232616]: 2026-01-31 08:23:25.356352671 +0000 UTC m=+0.929591940 container died 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:23:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435-merged.mount: Deactivated successfully.
Jan 31 03:23:25 np0005603663 podman[232616]: 2026-01-31 08:23:25.412715938 +0000 UTC m=+0.985955187 container remove 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:23:25 np0005603663 systemd[1]: libpod-conmon-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Deactivated successfully.
Jan 31 03:23:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:23:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:23:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:23:26 np0005603663 systemd-logind[793]: New session 51 of user zuul.
Jan 31 03:23:26 np0005603663 systemd[1]: Started Session 51 of User zuul.
Jan 31 03:23:26 np0005603663 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 03:23:26 np0005603663 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Jan 31 03:23:26 np0005603663 systemd-logind[793]: Removed session 51.
Jan 31 03:23:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:27 np0005603663 python3.9[233091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:27 np0005603663 python3.9[233212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847806.64989-986-120812274702130/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:28 np0005603663 python3.9[233362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:28 np0005603663 python3.9[233438]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:28 np0005603663 python3.9[233588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:29 np0005603663 python3.9[233709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847808.5453548-986-279288139863687/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:29 np0005603663 python3.9[233859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:30 np0005603663 python3.9[233980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847809.5444365-986-274071273058778/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:30 np0005603663 python3.9[234130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 03:23:31 np0005603663 python3.9[234251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847810.5386238-986-22877050462384/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:23:31
Jan 31 03:23:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:23:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:23:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', 'vms']
Jan 31 03:23:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:23:31 np0005603663 python3.9[234401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:32 np0005603663 python3.9[234522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847811.508902-986-115779033404361/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:23:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:23:33 np0005603663 python3.9[234674]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:33 np0005603663 python3.9[234826]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:34 np0005603663 python3.9[234978]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:23:34 np0005603663 python3.9[235130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 03:23:35 np0005603663 python3.9[235253]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769847814.4122767-1093-101739576691211/.source _original_basename=.cg9u3wcc follow=False checksum=1a407d9fba258306aa66616f39ab370f27391333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 03:23:35 np0005603663 python3.9[235405]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:23:36 np0005603663 python3.9[235557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 03:23:37 np0005603663 python3.9[235678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847816.1355546-1119-77208351948412/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:37 np0005603663 python3.9[235828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 03:23:38 np0005603663 python3.9[235949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847817.2293115-1134-239044020794555/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 03:23:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 03:23:39 np0005603663 python3.9[236101]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 03:23:40 np0005603663 python3.9[236253]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 03:23:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 03:23:41 np0005603663 python3[236405]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 03:23:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:23:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:23:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 03:23:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:50 np0005603663 podman[236474]: 2026-01-31 08:23:50.28870944 +0000 UTC m=+3.171703859 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:23:50 np0005603663 podman[236418]: 2026-01-31 08:23:50.300622323 +0000 UTC m=+9.155412013 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 03:23:50 np0005603663 podman[236524]: 2026-01-31 08:23:50.43356181 +0000 UTC m=+0.060867938 container create b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:23:50 np0005603663 podman[236524]: 2026-01-31 08:23:50.403663437 +0000 UTC m=+0.030969575 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 03:23:50 np0005603663 python3[236405]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 03:23:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:51 np0005603663 podman[236686]: 2026-01-31 08:23:51.0554968 +0000 UTC m=+0.059523469 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 31 03:23:51 np0005603663 python3.9[236734]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:23:52 np0005603663 python3.9[236888]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.450585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832450650, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1603, "num_deletes": 253, "total_data_size": 2684871, "memory_usage": 2729680, "flush_reason": "Manual Compaction"}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832459420, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1530477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11833, "largest_seqno": 13435, "table_properties": {"data_size": 1525100, "index_size": 2581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13606, "raw_average_key_size": 20, "raw_value_size": 1513278, "raw_average_value_size": 2238, "num_data_blocks": 119, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847651, "oldest_key_time": 1769847651, "file_creation_time": 1769847832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8906 microseconds, and 4784 cpu microseconds.
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.459490) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1530477 bytes OK
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.459518) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461112) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461147) EVENT_LOG_v1 {"time_micros": 1769847832461137, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2677933, prev total WAL file size 2677933, number of live WAL files 2.
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.462527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1494KB)], [29(8235KB)]
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832462587, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9963208, "oldest_snapshot_seqno": -1}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3994 keys, 7687150 bytes, temperature: kUnknown
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832522079, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7687150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7658453, "index_size": 17579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95458, "raw_average_key_size": 23, "raw_value_size": 7584414, "raw_average_value_size": 1898, "num_data_blocks": 764, "num_entries": 3994, "num_filter_entries": 3994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.522478) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7687150 bytes
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.524028) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.2 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(11.5) write-amplify(5.0) OK, records in: 4429, records dropped: 435 output_compression: NoCompression
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.524069) EVENT_LOG_v1 {"time_micros": 1769847832524051, "job": 12, "event": "compaction_finished", "compaction_time_micros": 59599, "compaction_time_cpu_micros": 26695, "output_level": 6, "num_output_files": 1, "total_output_size": 7687150, "num_input_records": 4429, "num_output_records": 3994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832524504, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832526294, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.462399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:52 np0005603663 python3.9[237040]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 03:23:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:53 np0005603663 python3[237192]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 03:23:54 np0005603663 podman[237229]: 2026-01-31 08:23:54.003662936 +0000 UTC m=+0.060824906 container create ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:23:54 np0005603663 podman[237229]: 2026-01-31 08:23:53.965652539 +0000 UTC m=+0.022814579 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 03:23:54 np0005603663 python3[237192]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 03:23:54 np0005603663 python3.9[237421]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:23:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:55 np0005603663 python3.9[237575]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:56 np0005603663 python3.9[237726]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847835.5164196-1230-193594826925030/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 03:23:56 np0005603663 python3.9[237802]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 03:23:56 np0005603663 systemd[1]: Reloading.
Jan 31 03:23:56 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:23:56 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:23:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:23:57 np0005603663 python3.9[237913]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 03:23:57 np0005603663 systemd[1]: Reloading.
Jan 31 03:23:57 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:23:57 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:23:57 np0005603663 systemd[1]: Starting nova_compute container...
Jan 31 03:23:58 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:23:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:58 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:58 np0005603663 podman[237953]: 2026-01-31 08:23:58.089686953 +0000 UTC m=+0.100461621 container init ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 03:23:58 np0005603663 podman[237953]: 2026-01-31 08:23:58.094550933 +0000 UTC m=+0.105325561 container start ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + sudo -E kolla_set_configs
Jan 31 03:23:58 np0005603663 podman[237953]: nova_compute
Jan 31 03:23:58 np0005603663 systemd[1]: Started nova_compute container.
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Validating config file
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying service configuration files
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Deleting /etc/ceph
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Creating directory /etc/ceph
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Writing out command to execute
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:23:58 np0005603663 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 03:23:58 np0005603663 nova_compute[237968]: ++ cat /run_command
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + CMD=nova-compute
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + ARGS=
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + sudo kolla_copy_cacerts
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + [[ ! -n '' ]]
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + . kolla_extend_start
Jan 31 03:23:58 np0005603663 nova_compute[237968]: Running command: 'nova-compute'
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + umask 0022
Jan 31 03:23:58 np0005603663 nova_compute[237968]: + exec nova-compute
Jan 31 03:23:58 np0005603663 python3.9[238129]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:23:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:23:59 np0005603663 python3.9[238280]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:24:00 np0005603663 python3.9[238430]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 03:24:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:01 np0005603663 python3.9[238582]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 03:24:01 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.266 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.415 237972 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.431 237972 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:01 np0005603663 nova_compute[237968]: 2026-01-31 08:24:01.432 237972 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:24:01 np0005603663 python3.9[238762]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 03:24:02 np0005603663 systemd[1]: Stopping nova_compute container...
Jan 31 03:24:02 np0005603663 nova_compute[237968]: 2026-01-31 08:24:02.096 237972 INFO nova.virt.driver [None req-d9f5c17e-8a43-4d95-bf06-9a10e6ca4464 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 03:24:02 np0005603663 systemd[1]: libpod-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662.scope: Deactivated successfully.
Jan 31 03:24:02 np0005603663 systemd[1]: libpod-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662.scope: Consumed 2.439s CPU time.
Jan 31 03:24:02 np0005603663 podman[238766]: 2026-01-31 08:24:02.120731132 +0000 UTC m=+0.073125022 container died ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Jan 31 03:24:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662-userdata-shm.mount: Deactivated successfully.
Jan 31 03:24:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833-merged.mount: Deactivated successfully.
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:04 np0005603663 podman[238766]: 2026-01-31 08:24:04.72109384 +0000 UTC m=+2.673487690 container cleanup ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:24:04 np0005603663 podman[238766]: nova_compute
Jan 31 03:24:04 np0005603663 podman[238796]: nova_compute
Jan 31 03:24:04 np0005603663 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 03:24:04 np0005603663 systemd[1]: Stopped nova_compute container.
Jan 31 03:24:04 np0005603663 systemd[1]: Starting nova_compute container...
Jan 31 03:24:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:05 np0005603663 podman[238809]: 2026-01-31 08:24:05.115511513 +0000 UTC m=+0.298416744 container init ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:24:05 np0005603663 podman[238809]: 2026-01-31 08:24:05.123039461 +0000 UTC m=+0.305944632 container start ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + sudo -E kolla_set_configs
Jan 31 03:24:05 np0005603663 podman[238809]: nova_compute
Jan 31 03:24:05 np0005603663 systemd[1]: Started nova_compute container.
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Validating config file
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying service configuration files
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /etc/ceph
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Creating directory /etc/ceph
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Writing out command to execute
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:24:05 np0005603663 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 03:24:05 np0005603663 nova_compute[238824]: ++ cat /run_command
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + CMD=nova-compute
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + ARGS=
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + sudo kolla_copy_cacerts
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + [[ ! -n '' ]]
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + . kolla_extend_start
Jan 31 03:24:05 np0005603663 nova_compute[238824]: Running command: 'nova-compute'
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + umask 0022
Jan 31 03:24:05 np0005603663 nova_compute[238824]: + exec nova-compute
Jan 31 03:24:05 np0005603663 python3.9[238987]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 03:24:06 np0005603663 systemd[1]: Started libpod-conmon-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope.
Jan 31 03:24:06 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:06 np0005603663 podman[239012]: 2026-01-31 08:24:06.170609935 +0000 UTC m=+0.140070654 container init b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:24:06 np0005603663 podman[239012]: 2026-01-31 08:24:06.17668499 +0000 UTC m=+0.146145699 container start b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:24:06 np0005603663 python3.9[238987]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 03:24:06 np0005603663 nova_compute_init[239034]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 03:24:06 np0005603663 systemd[1]: libpod-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope: Deactivated successfully.
Jan 31 03:24:06 np0005603663 podman[239035]: 2026-01-31 08:24:06.248621466 +0000 UTC m=+0.033244320 container died b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:24:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713-userdata-shm.mount: Deactivated successfully.
Jan 31 03:24:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7-merged.mount: Deactivated successfully.
Jan 31 03:24:06 np0005603663 podman[239046]: 2026-01-31 08:24:06.323854827 +0000 UTC m=+0.081246106 container cleanup b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:24:06 np0005603663 systemd[1]: libpod-conmon-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope: Deactivated successfully.
Jan 31 03:24:06 np0005603663 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 03:24:06 np0005603663 systemd[1]: session-50.scope: Consumed 1min 50.117s CPU time.
Jan 31 03:24:06 np0005603663 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Jan 31 03:24:06 np0005603663 systemd-logind[793]: Removed session 50.
Jan 31 03:24:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.172 238828 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.306 238828 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.329 238828 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.329 238828 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:24:07 np0005603663 nova_compute[238824]: 2026-01-31 08:24:07.833 238828 INFO nova.virt.driver [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.018 238828 INFO nova.compute.provider_config [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.075 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.076 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.076 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.077 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.077 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 WARNING oslo_config.cfg [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 03:24:08 np0005603663 nova_compute[238824]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 03:24:08 np0005603663 nova_compute[238824]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 03:24:08 np0005603663 nova_compute[238824]: and ``live_migration_inbound_addr`` respectively.
Jan 31 03:24:08 np0005603663 nova_compute[238824]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_secret_uuid        = 82c880e6-d992-5408-8b12-efff9c275473 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.235 238828 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.255 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.256 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.256 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.257 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 03:24:08 np0005603663 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 03:24:08 np0005603663 systemd[1]: Started libvirt QEMU daemon.
Jan 31 03:24:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.315 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6572815070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.318 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6572815070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.318 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.366 238828 WARNING nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 03:24:08 np0005603663 nova_compute[238824]: 2026-01-31 08:24:08.367 238828 DEBUG nova.virt.libvirt.volume.mount [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 31 03:24:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.268 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <host>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <uuid>2848852e-0b64-43df-9df3-1c9bd96fb83b</uuid>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <arch>x86_64</arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model>EPYC-Rome-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <vendor>AMD</vendor>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <microcode version='16777317'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <signature family='23' model='49' stepping='0'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='x2apic'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='tsc-deadline'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='osxsave'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='hypervisor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='tsc_adjust'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='spec-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='stibp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='arch-capabilities'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='cmp_legacy'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='topoext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='virt-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='lbrv'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='tsc-scale'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='vmcb-clean'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='pause-filter'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='pfthreshold'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='svme-addr-chk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='rdctl-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='mds-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature name='pschange-mc-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <pages unit='KiB' size='4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <pages unit='KiB' size='2048'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <pages unit='KiB' size='1048576'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <power_management>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <suspend_mem/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </power_management>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <iommu support='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <migration_features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <live/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <uri_transports>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <uri_transport>tcp</uri_transport>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <uri_transport>rdma</uri_transport>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </uri_transports>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </migration_features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <topology>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <cells num='1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <cell id='0'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <memory unit='KiB'>7864296</memory>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <pages unit='KiB' size='4'>1966074</pages>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <distances>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <sibling id='0' value='10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          </distances>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          <cpus num='8'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:          </cpus>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        </cell>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </cells>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </topology>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <cache>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </cache>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <secmodel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model>selinux</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <doi>0</doi>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </secmodel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <secmodel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model>dac</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <doi>0</doi>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </secmodel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </host>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <guest>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <os_type>hvm</os_type>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <arch name='i686'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <wordsize>32</wordsize>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <domain type='qemu'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <domain type='kvm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <pae/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <nonpae/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <acpi default='on' toggle='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <apic default='on' toggle='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <cpuselection/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <deviceboot/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <disksnapshot default='on' toggle='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <externalSnapshot/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </guest>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <guest>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <os_type>hvm</os_type>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <arch name='x86_64'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <wordsize>64</wordsize>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <domain type='qemu'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <domain type='kvm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <acpi default='on' toggle='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <apic default='on' toggle='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <cpuselection/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <deviceboot/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <disksnapshot default='on' toggle='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <externalSnapshot/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </guest>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 
Jan 31 03:24:09 np0005603663 nova_compute[238824]: </capabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: #033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.275 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.294 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 03:24:09 np0005603663 nova_compute[238824]: <domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <domain>kvm</domain>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <arch>i686</arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <vcpu max='240'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <iothreads supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <os supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='firmware'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <loader supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>rom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pflash</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='readonly'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>yes</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='secure'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </loader>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </os>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-passthrough' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='hostPassthroughMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='maximum' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='maximumMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-model' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <vendor>AMD</vendor>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='x2apic'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='hypervisor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='stibp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='overflow-recov'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='succor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lbrv'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-scale'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='flushbyasid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pause-filter'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pfthreshold'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='disable' name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='custom' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Dhyana-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v6'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v7'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <memoryBacking supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='sourceType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>anonymous</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>memfd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </memoryBacking>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <disk supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='diskDevice'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>disk</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cdrom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>floppy</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>lun</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ide</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>fdc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>sata</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </disk>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <graphics supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vnc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egl-headless</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </graphics>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <video supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='modelType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vga</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cirrus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>none</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>bochs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ramfb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </video>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hostdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='mode'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>subsystem</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='startupPolicy'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>mandatory</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>requisite</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>optional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='subsysType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pci</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='capsType'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='pciBackend'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hostdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <rng supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>random</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </rng>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <filesystem supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='driverType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>path</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>handle</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtiofs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </filesystem>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tpm supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-tis</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-crb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emulator</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>external</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendVersion'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>2.0</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </tpm>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <redirdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </redirdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <channel supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </channel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <crypto supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </crypto>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <interface supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>passt</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </interface>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <panic supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>isa</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>hyperv</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </panic>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <console supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>null</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dev</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pipe</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stdio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>udp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tcp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu-vdagent</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </console>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <gic supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <vmcoreinfo supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <genid supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backingStoreInput supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backup supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <async-teardown supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <s390-pv supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <ps2 supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tdx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sev supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sgx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hyperv supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='features'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>relaxed</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vapic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>spinlocks</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vpindex</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>runtime</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>synic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stimer</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reset</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vendor_id</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>frequencies</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reenlightenment</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tlbflush</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ipi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>avic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emsr_bitmap</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>xmm_input</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <spinlocks>4095</spinlocks>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <stimer_direct>on</stimer_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hyperv>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <launchSecurity supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: </domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.300 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 03:24:09 np0005603663 nova_compute[238824]: <domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <domain>kvm</domain>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <arch>i686</arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <vcpu max='4096'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <iothreads supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <os supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='firmware'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <loader supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>rom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pflash</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='readonly'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>yes</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='secure'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </loader>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </os>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-passthrough' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='hostPassthroughMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='maximum' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='maximumMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-model' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <vendor>AMD</vendor>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='x2apic'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='hypervisor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='stibp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='overflow-recov'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='succor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lbrv'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-scale'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='flushbyasid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pause-filter'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pfthreshold'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='disable' name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='custom' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Dhyana-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v6'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v7'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <memoryBacking supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='sourceType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>anonymous</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>memfd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </memoryBacking>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <disk supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='diskDevice'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>disk</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cdrom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>floppy</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>lun</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>fdc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>sata</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </disk>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <graphics supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vnc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egl-headless</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </graphics>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <video supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='modelType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vga</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cirrus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>none</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>bochs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ramfb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </video>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hostdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='mode'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>subsystem</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='startupPolicy'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>mandatory</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>requisite</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>optional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='subsysType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pci</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='capsType'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='pciBackend'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hostdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <rng supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>random</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </rng>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <filesystem supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='driverType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>path</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>handle</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtiofs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </filesystem>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tpm supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-tis</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-crb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emulator</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>external</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendVersion'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>2.0</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </tpm>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <redirdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </redirdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <channel supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </channel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <crypto supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </crypto>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <interface supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>passt</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </interface>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <panic supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>isa</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>hyperv</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </panic>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <console supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>null</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dev</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pipe</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stdio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>udp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tcp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu-vdagent</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </console>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <gic supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <vmcoreinfo supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <genid supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backingStoreInput supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backup supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <async-teardown supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <s390-pv supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <ps2 supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tdx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sev supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sgx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hyperv supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='features'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>relaxed</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vapic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>spinlocks</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vpindex</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>runtime</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>synic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stimer</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reset</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vendor_id</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>frequencies</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reenlightenment</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tlbflush</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ipi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>avic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emsr_bitmap</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>xmm_input</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <spinlocks>4095</spinlocks>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <stimer_direct>on</stimer_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hyperv>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <launchSecurity supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: </domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.345 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.350 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 03:24:09 np0005603663 nova_compute[238824]: <domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <domain>kvm</domain>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <arch>x86_64</arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <vcpu max='240'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <iothreads supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <os supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='firmware'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <loader supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>rom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pflash</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='readonly'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>yes</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='secure'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </loader>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </os>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-passthrough' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='hostPassthroughMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='maximum' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='maximumMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-model' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <vendor>AMD</vendor>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='x2apic'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='hypervisor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='stibp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='overflow-recov'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='succor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lbrv'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-scale'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='flushbyasid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pause-filter'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pfthreshold'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='disable' name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='custom' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Dhyana-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v6'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v7'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <memoryBacking supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='sourceType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>anonymous</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>memfd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </memoryBacking>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <disk supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='diskDevice'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>disk</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cdrom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>floppy</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>lun</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ide</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>fdc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>sata</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </disk>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <graphics supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vnc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egl-headless</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </graphics>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <video supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='modelType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vga</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cirrus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>none</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>bochs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ramfb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </video>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hostdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='mode'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>subsystem</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='startupPolicy'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>mandatory</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>requisite</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>optional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='subsysType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pci</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='capsType'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='pciBackend'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hostdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <rng supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>random</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </rng>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <filesystem supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='driverType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>path</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>handle</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtiofs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </filesystem>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tpm supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-tis</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-crb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emulator</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>external</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendVersion'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>2.0</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </tpm>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <redirdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </redirdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <channel supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </channel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <crypto supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </crypto>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <interface supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>passt</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </interface>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <panic supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>isa</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>hyperv</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </panic>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <console supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>null</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dev</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pipe</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stdio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>udp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tcp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu-vdagent</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </console>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <gic supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <vmcoreinfo supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <genid supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backingStoreInput supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backup supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <async-teardown supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <s390-pv supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <ps2 supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tdx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sev supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sgx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hyperv supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='features'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>relaxed</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vapic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>spinlocks</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vpindex</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>runtime</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>synic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stimer</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reset</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vendor_id</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>frequencies</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reenlightenment</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tlbflush</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ipi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>avic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emsr_bitmap</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>xmm_input</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <spinlocks>4095</spinlocks>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <stimer_direct>on</stimer_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hyperv>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <launchSecurity supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: </domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.409 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 03:24:09 np0005603663 nova_compute[238824]: <domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <domain>kvm</domain>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <arch>x86_64</arch>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <vcpu max='4096'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <iothreads supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <os supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='firmware'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>efi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <loader supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>rom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pflash</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='readonly'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>yes</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='secure'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>yes</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>no</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </loader>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </os>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-passthrough' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='hostPassthroughMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='maximum' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='maximumMigratable'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>on</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>off</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='host-model' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <vendor>AMD</vendor>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='x2apic'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='hypervisor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='stibp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='overflow-recov'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='succor'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lbrv'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='tsc-scale'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='flushbyasid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pause-filter'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='pfthreshold'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <feature policy='disable' name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <mode name='custom' supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Broadwell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='ClearwaterForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ddpd-u'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sha512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm3'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sm4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Cooperlake-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Denverton-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Dhyana-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Milan-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Rome-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-Turin-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amd-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='auto-ibrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vp2intersect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fs-gs-base-ns'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibpb-brtype'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='no-nested-data-bp'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='null-sel-clr-base'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='perfmon-v2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbpb'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='srso-user-kernel-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='stibp-always-on'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='EPYC-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='GraniteRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-128'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-256'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx10-512'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='prefetchiti'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Haswell-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v6'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Icelake-Server-v7'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='IvyBridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='KnightsMill-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4fmaps'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-4vnniw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512er'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512pf'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G4-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Opteron_G5-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fma4'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tbm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xop'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SapphireRapids-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='amx-tile'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-bf16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-fp16'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512-vpopcntdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bitalg'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vbmi2'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrc'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fzrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='la57'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='taa-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='tsx-ldtrk'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='SierraForest-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ifma'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-ne-convert'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx-vnni-int8'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bhi-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='bus-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cmpccxadd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fbsdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='fsrs'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ibrs-all'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='intel-psfd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ipred-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='lam'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mcdt-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pbrsb-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='psdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rrsba-ctrl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='sbdr-ssdp-no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='serialize'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vaes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='vpclmulqdq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Client-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='hle'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='rtm'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Skylake-Server-v5'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512bw'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512cd'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512dq'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512f'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='avx512vl'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='invpcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pcid'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='pku'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='mpx'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v2'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v3'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='core-capability'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='split-lock-detect'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='Snowridge-v4'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='cldemote'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='erms'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='gfni'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdir64b'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='movdiri'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='xsaves'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='athlon-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='core2duo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='coreduo-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='n270-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='ss'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <blockers model='phenom-v1'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnow'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <feature name='3dnowext'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </blockers>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </mode>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </cpu>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <memoryBacking supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <enum name='sourceType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>anonymous</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <value>memfd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </memoryBacking>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <disk supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='diskDevice'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>disk</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cdrom</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>floppy</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>lun</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>fdc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>sata</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </disk>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <graphics supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vnc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egl-headless</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </graphics>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <video supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='modelType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vga</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>cirrus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>none</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>bochs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ramfb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </video>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hostdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='mode'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>subsystem</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='startupPolicy'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>mandatory</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>requisite</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>optional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='subsysType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pci</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>scsi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='capsType'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='pciBackend'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hostdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <rng supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtio-non-transitional</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>random</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>egd</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </rng>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <filesystem supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='driverType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>path</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>handle</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>virtiofs</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </filesystem>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tpm supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-tis</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tpm-crb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emulator</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>external</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendVersion'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>2.0</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </tpm>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <redirdev supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='bus'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>usb</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </redirdev>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <channel supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </channel>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <crypto supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendModel'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>builtin</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </crypto>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <interface supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='backendType'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>default</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>passt</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </interface>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <panic supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='model'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>isa</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>hyperv</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </panic>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <console supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='type'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>null</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vc</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pty</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dev</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>file</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>pipe</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stdio</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>udp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tcp</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>unix</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>qemu-vdagent</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>dbus</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </console>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </devices>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  <features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <gic supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <vmcoreinfo supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <genid supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backingStoreInput supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <backup supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <async-teardown supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <s390-pv supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <ps2 supported='yes'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <tdx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sev supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <sgx supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <hyperv supported='yes'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <enum name='features'>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>relaxed</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vapic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>spinlocks</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vpindex</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>runtime</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>synic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>stimer</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reset</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>vendor_id</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>frequencies</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>reenlightenment</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>tlbflush</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>ipi</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>avic</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>emsr_bitmap</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <value>xmm_input</value>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </enum>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      <defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <spinlocks>4095</spinlocks>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <stimer_direct>on</stimer_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:      </defaults>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    </hyperv>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:    <launchSecurity supported='no'/>
Jan 31 03:24:09 np0005603663 nova_compute[238824]:  </features>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: </domainCapabilities>
Jan 31 03:24:09 np0005603663 nova_compute[238824]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.468 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.469 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.469 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.473 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Secure Boot support detected#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.474 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.474 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.483 238828 DEBUG nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.791 238828 INFO nova.virt.node [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Determined node identity 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from /var/lib/nova/compute_id#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.824 238828 WARNING nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute nodes ['6d4ff98f-eb37-47a1-bfaf-01e7f5329d98'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 31 03:24:09 np0005603663 nova_compute[238824]: 2026-01-31 08:24:09.914 238828 INFO nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.016 238828 WARNING nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.017 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.017 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.018 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.018 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.019 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:24:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323564417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.725 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:10 np0005603663 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 03:24:10 np0005603663 systemd[1]: Started libvirt nodedev daemon.
Jan 31 03:24:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.957 238828 WARNING nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.973 238828 WARNING nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] No compute node record for compute-0.ctlplane.example.com:6d4ff98f-eb37-47a1-bfaf-01e7f5329d98: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 could not be found.#033[00m
Jan 31 03:24:10 np0005603663 nova_compute[238824]: 2026-01-31 08:24:10.994 238828 INFO nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98#033[00m
Jan 31 03:24:11 np0005603663 nova_compute[238824]: 2026-01-31 08:24:11.061 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:24:11 np0005603663 nova_compute[238824]: 2026-01-31 08:24:11.062 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:24:12 np0005603663 nova_compute[238824]: 2026-01-31 08:24:12.055 238828 INFO nova.scheduler.client.report [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] [req-cd6794b3-ca5f-493c-ba06-b8a85a3a3273] Created resource provider record via placement API for resource provider with UUID 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 and name compute-0.ctlplane.example.com.#033[00m
Jan 31 03:24:12 np0005603663 nova_compute[238824]: 2026-01-31 08:24:12.437 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:24:13 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229754555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.079 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.084 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 03:24:13 np0005603663 nova_compute[238824]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.085 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.086 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.086 238828 DEBUG nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.scheduler.client.report [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updated inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.230 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.255 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.255 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.256 238828 DEBUG nova.service [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 31 03:24:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.349 238828 DEBUG nova.service [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 31 03:24:13 np0005603663 nova_compute[238824]: 2026-01-31 08:24:13.350 238828 DEBUG nova.servicegroup.drivers.db [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 31 03:24:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.883 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:21 np0005603663 podman[239235]: 2026-01-31 08:24:21.182269263 +0000 UTC m=+0.071487534 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 03:24:21 np0005603663 podman[239234]: 2026-01-31 08:24:21.203052563 +0000 UTC m=+0.097182226 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 03:24:21 np0005603663 nova_compute[238824]: 2026-01-31 08:24:21.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:24:21 np0005603663 nova_compute[238824]: 2026-01-31 08:24:21.373 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:24:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.487398064 +0000 UTC m=+0.057620484 container create 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:24:26 np0005603663 systemd[1]: Started libpod-conmon-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope.
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.459459788 +0000 UTC m=+0.029682258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.591826538 +0000 UTC m=+0.162048968 container init 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.600009334 +0000 UTC m=+0.170231754 container start 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.604525485 +0000 UTC m=+0.174747905 container attach 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:24:26 np0005603663 focused_carson[239436]: 167 167
Jan 31 03:24:26 np0005603663 systemd[1]: libpod-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope: Deactivated successfully.
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.608286393 +0000 UTC m=+0.178508803 container died 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:24:26 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5fa9843d05484237daea77e45143acdeb16995fd0e7423b0d9d0f53819a35edf-merged.mount: Deactivated successfully.
Jan 31 03:24:26 np0005603663 podman[239422]: 2026-01-31 08:24:26.653481027 +0000 UTC m=+0.223703417 container remove 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:24:26 np0005603663 systemd[1]: libpod-conmon-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope: Deactivated successfully.
Jan 31 03:24:26 np0005603663 podman[239458]: 2026-01-31 08:24:26.840223707 +0000 UTC m=+0.055441581 container create ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:24:26 np0005603663 systemd[1]: Started libpod-conmon-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope.
Jan 31 03:24:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:26 np0005603663 podman[239458]: 2026-01-31 08:24:26.817559983 +0000 UTC m=+0.032777947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:26 np0005603663 podman[239458]: 2026-01-31 08:24:26.937779313 +0000 UTC m=+0.152997277 container init ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:24:26 np0005603663 podman[239458]: 2026-01-31 08:24:26.945735352 +0000 UTC m=+0.160953236 container start ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:24:26 np0005603663 podman[239458]: 2026-01-31 08:24:26.949676846 +0000 UTC m=+0.164894740 container attach ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:24:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:27 np0005603663 hopeful_haslett[239474]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:24:27 np0005603663 hopeful_haslett[239474]: --> All data devices are unavailable
Jan 31 03:24:27 np0005603663 systemd[1]: libpod-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope: Deactivated successfully.
Jan 31 03:24:27 np0005603663 podman[239458]: 2026-01-31 08:24:27.406032537 +0000 UTC m=+0.621250451 container died ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6-merged.mount: Deactivated successfully.
Jan 31 03:24:27 np0005603663 podman[239458]: 2026-01-31 08:24:27.461036874 +0000 UTC m=+0.676254798 container remove ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:24:27 np0005603663 systemd[1]: libpod-conmon-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope: Deactivated successfully.
Jan 31 03:24:27 np0005603663 podman[239567]: 2026-01-31 08:24:27.978347975 +0000 UTC m=+0.082709429 container create 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:27.928548157 +0000 UTC m=+0.032909641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:28 np0005603663 systemd[1]: Started libpod-conmon-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope.
Jan 31 03:24:28 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:28.219443862 +0000 UTC m=+0.323805336 container init 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:28.224422806 +0000 UTC m=+0.328784260 container start 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:24:28 np0005603663 pensive_bell[239584]: 167 167
Jan 31 03:24:28 np0005603663 systemd[1]: libpod-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope: Deactivated successfully.
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:28.270902827 +0000 UTC m=+0.375264281 container attach 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:28.271530705 +0000 UTC m=+0.375892169 container died 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:24:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5c589394c3c5824bcc0fe5c89fa0e1b77033e63991d2618df2115cea26966399-merged.mount: Deactivated successfully.
Jan 31 03:24:28 np0005603663 podman[239567]: 2026-01-31 08:24:28.758425088 +0000 UTC m=+0.862786562 container remove 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:24:28 np0005603663 systemd[1]: libpod-conmon-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope: Deactivated successfully.
Jan 31 03:24:28 np0005603663 podman[239609]: 2026-01-31 08:24:28.956667179 +0000 UTC m=+0.092879101 container create 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:24:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:28 np0005603663 podman[239609]: 2026-01-31 08:24:28.885843225 +0000 UTC m=+0.022055147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:29 np0005603663 systemd[1]: Started libpod-conmon-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope.
Jan 31 03:24:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:29 np0005603663 podman[239609]: 2026-01-31 08:24:29.08109045 +0000 UTC m=+0.217302412 container init 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:24:29 np0005603663 podman[239609]: 2026-01-31 08:24:29.089462252 +0000 UTC m=+0.225674154 container start 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:24:29 np0005603663 podman[239609]: 2026-01-31 08:24:29.09426436 +0000 UTC m=+0.230476262 container attach 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]: {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    "0": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "devices": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "/dev/loop3"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            ],
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_name": "ceph_lv0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_size": "21470642176",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "name": "ceph_lv0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "tags": {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_name": "ceph",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.crush_device_class": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.encrypted": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.objectstore": "bluestore",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_id": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.vdo": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.with_tpm": "0"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            },
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "vg_name": "ceph_vg0"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        }
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    ],
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    "1": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "devices": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "/dev/loop4"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            ],
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_name": "ceph_lv1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_size": "21470642176",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "name": "ceph_lv1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "tags": {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_name": "ceph",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.crush_device_class": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.encrypted": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.objectstore": "bluestore",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_id": "1",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.vdo": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.with_tpm": "0"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            },
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "vg_name": "ceph_vg1"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        }
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    ],
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    "2": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "devices": [
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "/dev/loop5"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            ],
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_name": "ceph_lv2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_size": "21470642176",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "name": "ceph_lv2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "tags": {
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.cluster_name": "ceph",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.crush_device_class": "",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.encrypted": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.objectstore": "bluestore",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osd_id": "2",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.vdo": "0",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:                "ceph.with_tpm": "0"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            },
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "type": "block",
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:            "vg_name": "ceph_vg2"
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:        }
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]:    ]
Jan 31 03:24:29 np0005603663 naughty_kirch[239625]: }
Jan 31 03:24:29 np0005603663 systemd[1]: libpod-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope: Deactivated successfully.
Jan 31 03:24:29 np0005603663 podman[239609]: 2026-01-31 08:24:29.375804816 +0000 UTC m=+0.512016718 container died 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:24:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e-merged.mount: Deactivated successfully.
Jan 31 03:24:29 np0005603663 podman[239609]: 2026-01-31 08:24:29.424844021 +0000 UTC m=+0.561055903 container remove 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:24:29 np0005603663 systemd[1]: libpod-conmon-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope: Deactivated successfully.
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.892376875 +0000 UTC m=+0.035693842 container create b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:24:29 np0005603663 systemd[1]: Started libpod-conmon-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope.
Jan 31 03:24:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.956063353 +0000 UTC m=+0.099380380 container init b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.964035133 +0000 UTC m=+0.107352150 container start b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:24:29 np0005603663 sad_goodall[239724]: 167 167
Jan 31 03:24:29 np0005603663 systemd[1]: libpod-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope: Deactivated successfully.
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.970381266 +0000 UTC m=+0.113698263 container attach b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.970959233 +0000 UTC m=+0.114276250 container died b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:24:29 np0005603663 podman[239708]: 2026-01-31 08:24:29.877801134 +0000 UTC m=+0.021118141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:30 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6d76ec5c6d83dd2621d2d776e47f563e89dd42bb906306bd4fe677ac091aeb5b-merged.mount: Deactivated successfully.
Jan 31 03:24:30 np0005603663 podman[239708]: 2026-01-31 08:24:30.023085477 +0000 UTC m=+0.166402494 container remove b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:30 np0005603663 systemd[1]: libpod-conmon-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope: Deactivated successfully.
Jan 31 03:24:30 np0005603663 podman[239747]: 2026-01-31 08:24:30.185242237 +0000 UTC m=+0.047217484 container create 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:24:30 np0005603663 systemd[1]: Started libpod-conmon-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope.
Jan 31 03:24:30 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:24:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:30 np0005603663 podman[239747]: 2026-01-31 08:24:30.160974697 +0000 UTC m=+0.022949964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:24:30 np0005603663 podman[239747]: 2026-01-31 08:24:30.264901526 +0000 UTC m=+0.126876743 container init 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:30 np0005603663 podman[239747]: 2026-01-31 08:24:30.270403935 +0000 UTC m=+0.132379142 container start 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:24:30 np0005603663 podman[239747]: 2026-01-31 08:24:30.27406187 +0000 UTC m=+0.136037077 container attach 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 lvm[239840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:24:30 np0005603663 lvm[239840]: VG ceph_vg0 finished
Jan 31 03:24:30 np0005603663 lvm[239843]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:24:30 np0005603663 lvm[239843]: VG ceph_vg1 finished
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:24:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:24:30 np0005603663 lvm[239845]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:24:30 np0005603663 lvm[239845]: VG ceph_vg2 finished
Jan 31 03:24:30 np0005603663 lvm[239846]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:24:30 np0005603663 lvm[239846]: VG ceph_vg1 finished
Jan 31 03:24:30 np0005603663 lvm[239848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:24:30 np0005603663 lvm[239848]: VG ceph_vg1 finished
Jan 31 03:24:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:30 np0005603663 zen_meninsky[239764]: {}
Jan 31 03:24:31 np0005603663 systemd[1]: libpod-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope: Deactivated successfully.
Jan 31 03:24:31 np0005603663 podman[239747]: 2026-01-31 08:24:31.004811011 +0000 UTC m=+0.866786218 container died 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:24:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf-merged.mount: Deactivated successfully.
Jan 31 03:24:31 np0005603663 podman[239747]: 2026-01-31 08:24:31.049448689 +0000 UTC m=+0.911423906 container remove 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:24:31 np0005603663 systemd[1]: libpod-conmon-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope: Deactivated successfully.
Jan 31 03:24:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:24:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:24:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:24:31
Jan 31 03:24:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:24:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:24:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'vms', 'volumes', '.rgw.root']
Jan 31 03:24:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:24:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:24:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:24:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:24:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:24:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:52 np0005603663 podman[239888]: 2026-01-31 08:24:52.155967129 +0000 UTC m=+0.051951501 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:24:52 np0005603663 podman[239887]: 2026-01-31 08:24:52.187279713 +0000 UTC m=+0.083285375 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 03:24:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:24:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:24:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:00 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:02 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:04 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:06 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.436 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.529 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:25:07 np0005603663 nova_compute[238824]: 2026-01-31 08:25:07.531 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:25:08 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215308270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.055 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.203 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.204 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.204 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.205 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.478 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.479 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:25:08 np0005603663 nova_compute[238824]: 2026-01-31 08:25:08.516 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:08 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:25:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120157718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:25:09 np0005603663 nova_compute[238824]: 2026-01-31 08:25:09.048 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:09 np0005603663 nova_compute[238824]: 2026-01-31 08:25:09.053 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:09 np0005603663 nova_compute[238824]: 2026-01-31 08:25:09.106 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
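Placement derives schedulable capacity per resource class from exactly the fields in the inventory record logged above: `(total - reserved) * allocation_ratio`. A minimal sketch of that arithmetic, using the values from this node's inventory (this restates the rule, not Placement's implementation):

```python
# The inventory record logged above, restated as a dict.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    """Usable capacity per resource class: (total - reserved) * allocation_ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

caps = effective_capacity(inventory)
print(caps["MEMORY_MB"], caps["VCPU"])  # → 7167.0 32.0
```

So despite 8 physical vCPUs, the 4.0 allocation ratio lets the scheduler place up to 32 vCPUs' worth of instances on this host, while memory is not oversubscribed and disk is slightly undersubscribed (ratio 0.9).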
Jan 31 03:25:09 np0005603663 nova_compute[238824]: 2026-01-31 08:25:09.172 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:25:09 np0005603663 nova_compute[238824]: 2026-01-31 08:25:09.172 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:10 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:12 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:14 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:16 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
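The recurring pgmap summaries are easy to extract with a small regex when scanning logs like these for capacity trends; the pattern below is a hypothetical helper written against the exact line shape shown above, not part of any Ceph tooling:

```python
import re

line = ("pgmap v680: 305 pgs: 305 active+clean; "
        "461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail")

# Hypothetical pattern matching the mgr pgmap summary format in this log.
pat = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
)

m = pat.search(line)
print(m.group("ver"), m.group("pgs"), m.group("avail"))  # → 680 305 60 GiB
```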
Jan 31 03:25:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 03:25:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2262470798' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 03:25:17 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 03:25:17 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 03:25:17 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 03:25:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:18 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:20 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:22 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:23 np0005603663 podman[239975]: 2026-01-31 08:25:23.174824261 +0000 UTC m=+0.068640050 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:25:23 np0005603663 podman[239976]: 2026-01-31 08:25:23.174875973 +0000 UTC m=+0.058778443 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:25:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:24 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:26 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:28 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:30 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:25:31
Jan 31 03:25:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:25:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:25:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms']
Jan 31 03:25:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:25:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:25:32 np0005603663 podman[240165]: 2026-01-31 08:25:32.114641982 +0000 UTC m=+0.018733166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:32 np0005603663 podman[240165]: 2026-01-31 08:25:32.30788034 +0000 UTC m=+0.211971504 container create 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:25:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:25:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:25:32 np0005603663 systemd[1]: Started libpod-conmon-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope.
Jan 31 03:25:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:32 np0005603663 podman[240165]: 2026-01-31 08:25:32.855760976 +0000 UTC m=+0.759852180 container init 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:25:32 np0005603663 podman[240165]: 2026-01-31 08:25:32.862814411 +0000 UTC m=+0.766905585 container start 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:25:32 np0005603663 elated_pasteur[240184]: 167 167
Jan 31 03:25:32 np0005603663 systemd[1]: libpod-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope: Deactivated successfully.
Jan 31 03:25:32 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:25:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:25:33 np0005603663 podman[240165]: 2026-01-31 08:25:33.085202108 +0000 UTC m=+0.989293302 container attach 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:25:33 np0005603663 podman[240165]: 2026-01-31 08:25:33.086892808 +0000 UTC m=+0.990983992 container died 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 03:25:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-520b4e5d87dbc55e9a6d354c7cfa130e4032296f4c556e0a82fe8ede27de3688-merged.mount: Deactivated successfully.
Jan 31 03:25:34 np0005603663 podman[240165]: 2026-01-31 08:25:34.959936008 +0000 UTC m=+2.864027162 container remove 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:34 np0005603663 systemd[1]: libpod-conmon-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope: Deactivated successfully.
Jan 31 03:25:34 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:35 np0005603663 podman[240208]: 2026-01-31 08:25:35.061209718 +0000 UTC m=+0.025607297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:35 np0005603663 podman[240208]: 2026-01-31 08:25:35.187651431 +0000 UTC m=+0.152048980 container create 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:25:35 np0005603663 systemd[1]: Started libpod-conmon-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope.
Jan 31 03:25:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:35 np0005603663 podman[240208]: 2026-01-31 08:25:35.896102754 +0000 UTC m=+0.860500353 container init 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:35 np0005603663 podman[240208]: 2026-01-31 08:25:35.902739267 +0000 UTC m=+0.867136836 container start 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:25:36 np0005603663 podman[240208]: 2026-01-31 08:25:36.083861231 +0000 UTC m=+1.048258800 container attach 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:36 np0005603663 friendly_darwin[240224]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:25:36 np0005603663 friendly_darwin[240224]: --> All data devices are unavailable
Jan 31 03:25:36 np0005603663 systemd[1]: libpod-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope: Deactivated successfully.
Jan 31 03:25:36 np0005603663 podman[240208]: 2026-01-31 08:25:36.317451614 +0000 UTC m=+1.281849213 container died 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:25:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369-merged.mount: Deactivated successfully.
Jan 31 03:25:36 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:37 np0005603663 podman[240208]: 2026-01-31 08:25:37.035176336 +0000 UTC m=+1.999573875 container remove 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:37 np0005603663 systemd[1]: libpod-conmon-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope: Deactivated successfully.
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.484538854 +0000 UTC m=+0.062898053 container create 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.449535564 +0000 UTC m=+0.027894823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:37 np0005603663 systemd[1]: Started libpod-conmon-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope.
Jan 31 03:25:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.669236343 +0000 UTC m=+0.247595522 container init 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.675264308 +0000 UTC m=+0.253623477 container start 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:25:37 np0005603663 cranky_liskov[240334]: 167 167
Jan 31 03:25:37 np0005603663 systemd[1]: libpod-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope: Deactivated successfully.
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.705475118 +0000 UTC m=+0.283834317 container attach 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.706170798 +0000 UTC m=+0.284529987 container died 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:37 np0005603663 systemd[1]: var-lib-containers-storage-overlay-76f5a5c166f4076e067f27147d417eeb30d2677ee0756cb05a927a37317153d2-merged.mount: Deactivated successfully.
Jan 31 03:25:37 np0005603663 podman[240317]: 2026-01-31 08:25:37.937106634 +0000 UTC m=+0.515465793 container remove 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:25:37 np0005603663 systemd[1]: libpod-conmon-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope: Deactivated successfully.
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.110501884 +0000 UTC m=+0.050991016 container create 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:38 np0005603663 systemd[1]: Started libpod-conmon-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope.
Jan 31 03:25:38 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:38 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.084603409 +0000 UTC m=+0.025092521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.199055803 +0000 UTC m=+0.139544915 container init 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.205932093 +0000 UTC m=+0.146421185 container start 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.23295652 +0000 UTC m=+0.173445612 container attach 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:38 np0005603663 busy_newton[240377]: {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    "0": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "devices": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "/dev/loop3"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            ],
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_name": "ceph_lv0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_size": "21470642176",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "name": "ceph_lv0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "tags": {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_name": "ceph",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.crush_device_class": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.encrypted": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.objectstore": "bluestore",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_id": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.vdo": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.with_tpm": "0"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            },
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "vg_name": "ceph_vg0"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        }
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    ],
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    "1": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "devices": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "/dev/loop4"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            ],
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_name": "ceph_lv1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_size": "21470642176",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "name": "ceph_lv1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "tags": {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_name": "ceph",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.crush_device_class": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.encrypted": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.objectstore": "bluestore",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_id": "1",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.vdo": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.with_tpm": "0"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            },
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "vg_name": "ceph_vg1"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        }
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    ],
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    "2": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "devices": [
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "/dev/loop5"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            ],
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_name": "ceph_lv2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_size": "21470642176",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "name": "ceph_lv2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "tags": {
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.cluster_name": "ceph",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.crush_device_class": "",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.encrypted": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.objectstore": "bluestore",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osd_id": "2",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.vdo": "0",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:                "ceph.with_tpm": "0"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            },
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "type": "block",
Jan 31 03:25:38 np0005603663 busy_newton[240377]:            "vg_name": "ceph_vg2"
Jan 31 03:25:38 np0005603663 busy_newton[240377]:        }
Jan 31 03:25:38 np0005603663 busy_newton[240377]:    ]
Jan 31 03:25:38 np0005603663 busy_newton[240377]: }
Jan 31 03:25:38 np0005603663 systemd[1]: libpod-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope: Deactivated successfully.
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.497622598 +0000 UTC m=+0.438111700 container died 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:25:38 np0005603663 systemd[1]: var-lib-containers-storage-overlay-54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad-merged.mount: Deactivated successfully.
Jan 31 03:25:38 np0005603663 podman[240360]: 2026-01-31 08:25:38.631032584 +0000 UTC m=+0.571521686 container remove 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:25:38 np0005603663 systemd[1]: libpod-conmon-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope: Deactivated successfully.
Jan 31 03:25:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:38 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.060520532 +0000 UTC m=+0.031872939 container create c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:25:39 np0005603663 systemd[1]: Started libpod-conmon-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope.
Jan 31 03:25:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.045134264 +0000 UTC m=+0.016486681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.198864101 +0000 UTC m=+0.170216618 container init c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.203921728 +0000 UTC m=+0.175274135 container start c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:25:39 np0005603663 optimistic_greider[240478]: 167 167
Jan 31 03:25:39 np0005603663 systemd[1]: libpod-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope: Deactivated successfully.
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.266492201 +0000 UTC m=+0.237844648 container attach c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.267036726 +0000 UTC m=+0.238389233 container died c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:25:39 np0005603663 systemd[1]: var-lib-containers-storage-overlay-99e2320e65e38026d061c2086bd5681e3de533c320752845f610136230f82318-merged.mount: Deactivated successfully.
Jan 31 03:25:39 np0005603663 podman[240462]: 2026-01-31 08:25:39.430616371 +0000 UTC m=+0.401968808 container remove c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:39 np0005603663 systemd[1]: libpod-conmon-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope: Deactivated successfully.
Jan 31 03:25:39 np0005603663 podman[240503]: 2026-01-31 08:25:39.616719481 +0000 UTC m=+0.089768146 container create 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:25:39 np0005603663 podman[240503]: 2026-01-31 08:25:39.570401972 +0000 UTC m=+0.043450657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:25:39 np0005603663 systemd[1]: Started libpod-conmon-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope.
Jan 31 03:25:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:25:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:39 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:39 np0005603663 podman[240503]: 2026-01-31 08:25:39.794600611 +0000 UTC m=+0.267649376 container init 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:39 np0005603663 podman[240503]: 2026-01-31 08:25:39.803707845 +0000 UTC m=+0.276756520 container start 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:25:39 np0005603663 podman[240503]: 2026-01-31 08:25:39.903820671 +0000 UTC m=+0.376869456 container attach 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:25:40 np0005603663 lvm[240598]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:25:40 np0005603663 lvm[240596]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:25:40 np0005603663 lvm[240596]: VG ceph_vg0 finished
Jan 31 03:25:40 np0005603663 lvm[240598]: VG ceph_vg1 finished
Jan 31 03:25:40 np0005603663 lvm[240600]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:25:40 np0005603663 lvm[240600]: VG ceph_vg2 finished
Jan 31 03:25:40 np0005603663 pedantic_germain[240519]: {}
Jan 31 03:25:40 np0005603663 systemd[1]: libpod-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Deactivated successfully.
Jan 31 03:25:40 np0005603663 systemd[1]: libpod-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Consumed 1.113s CPU time.
Jan 31 03:25:40 np0005603663 podman[240503]: 2026-01-31 08:25:40.610377029 +0000 UTC m=+1.083425714 container died 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:40 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc-merged.mount: Deactivated successfully.
Jan 31 03:25:40 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:41 np0005603663 podman[240503]: 2026-01-31 08:25:41.240950043 +0000 UTC m=+1.713998718 container remove 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:41 np0005603663 systemd[1]: libpod-conmon-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Deactivated successfully.
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086333539' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 03:25:41 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 03:25:41 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 03:25:41 np0005603663 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:41 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:25:42 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:25:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:25:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:44 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:46 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:48 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:50 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:52 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:54 np0005603663 podman[240641]: 2026-01-31 08:25:54.169964235 +0000 UTC m=+0.055900259 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 03:25:54 np0005603663 podman[240640]: 2026-01-31 08:25:54.201184534 +0000 UTC m=+0.091810224 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:25:54 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:56 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:25:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:25:58 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.164 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.204 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.317 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:26:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866780905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:26:09 np0005603663 nova_compute[238824]: 2026-01-31 08:26:09.882 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.040 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.041 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.041 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.042 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.227 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.243 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:26:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511150945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.830 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.835 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.871 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.873 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.873 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.970 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.970 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:26:10 np0005603663 nova_compute[238824]: 2026-01-31 08:26:10.971 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:26:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:26:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3308 writes, 14K keys, 3308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1274 writes, 5564 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.57 MB, 0.01 MB/s
Interval WAL: 1274 writes, 1274 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     51.5      0.29              0.03         6    0.048       0      0       0.0       0.0
  L6      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     72.7     60.0      0.60              0.11         5    0.120     19K   2202       0.0       0.0
 Sum      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     49.0     57.2      0.89              0.14        11    0.081     19K   2202       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     36.8     37.6      0.74              0.09         6    0.123     12K   1463       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     72.7     60.0      0.60              0.11         5    0.120     19K   2202       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     52.1      0.29              0.03         5    0.057       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.015, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.9 seconds
Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 308.00 MB usage: 1.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(88,1.43 MB,0.464848%) FilterBlock(12,63.17 KB,0.0200296%) IndexBlock(12,130.39 KB,0.0413424%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 03:26:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.483 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.485 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.486 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:26:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:26:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:26:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:26:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:26:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:26:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:25 np0005603663 podman[240730]: 2026-01-31 08:26:25.187338538 +0000 UTC m=+0.067508207 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:26:25 np0005603663 podman[240729]: 2026-01-31 08:26:25.208237557 +0000 UTC m=+0.092205816 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:26:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:26:31
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Jan 31 03:26:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:26:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:26:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:42 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.631983512 +0000 UTC m=+0.049211674 container create eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:26:42 np0005603663 systemd[1]: Started libpod-conmon-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope.
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.604354317 +0000 UTC m=+0.021582479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:42 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.763569704 +0000 UTC m=+0.180797786 container init eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.773488083 +0000 UTC m=+0.190716145 container start eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 31 03:26:42 np0005603663 systemd[1]: libpod-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope: Deactivated successfully.
Jan 31 03:26:42 np0005603663 admiring_mendeleev[240933]: 167 167
Jan 31 03:26:42 np0005603663 conmon[240933]: conmon eff07b4be281745ee49a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope/container/memory.events
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.793790304 +0000 UTC m=+0.211018416 container attach eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.79606083 +0000 UTC m=+0.213288912 container died eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:26:42 np0005603663 systemd[1]: var-lib-containers-storage-overlay-890727d47b249c47f4296fb55192a1ce188aab0af839f58dc49ab94004668e4f-merged.mount: Deactivated successfully.
Jan 31 03:26:42 np0005603663 podman[240917]: 2026-01-31 08:26:42.970540282 +0000 UTC m=+0.387768394 container remove eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:26:43 np0005603663 systemd[1]: libpod-conmon-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope: Deactivated successfully.
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.194403311 +0000 UTC m=+0.069309329 container create ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:26:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:26:43 np0005603663 systemd[1]: Started libpod-conmon-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope.
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.167027614 +0000 UTC m=+0.041933732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:43 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:43 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.318052233 +0000 UTC m=+0.192958261 container init ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.329538537 +0000 UTC m=+0.204444595 container start ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.333599365 +0000 UTC m=+0.208505423 container attach ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:43 np0005603663 bold_carver[240975]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:26:43 np0005603663 bold_carver[240975]: --> All data devices are unavailable
Jan 31 03:26:43 np0005603663 systemd[1]: libpod-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope: Deactivated successfully.
Jan 31 03:26:43 np0005603663 podman[240958]: 2026-01-31 08:26:43.798617089 +0000 UTC m=+0.673523137 container died ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:43.971143) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848003971198, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1843, "num_deletes": 505, "total_data_size": 2580126, "memory_usage": 2622608, "flush_reason": "Manual Compaction"}
Jan 31 03:26:43 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004056642, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2544428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13436, "largest_seqno": 15278, "table_properties": {"data_size": 2536397, "index_size": 4397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18805, "raw_average_key_size": 18, "raw_value_size": 2518403, "raw_average_value_size": 2478, "num_data_blocks": 200, "num_entries": 1016, "num_filter_entries": 1016, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847833, "oldest_key_time": 1769847833, "file_creation_time": 1769848003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 85616 microseconds, and 5938 cpu microseconds.
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.056747) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2544428 bytes OK
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.056786) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096859) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096928) EVENT_LOG_v1 {"time_micros": 1769848004096918, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2571184, prev total WAL file size 2571184, number of live WAL files 2.
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.097835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2484KB)], [32(7506KB)]
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004097909, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10231578, "oldest_snapshot_seqno": -1}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3987 keys, 8178838 bytes, temperature: kUnknown
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004477800, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8178838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8149457, "index_size": 18327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97350, "raw_average_key_size": 24, "raw_value_size": 8074618, "raw_average_value_size": 2025, "num_data_blocks": 776, "num_entries": 3987, "num_filter_entries": 3987, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:26:44 np0005603663 systemd[1]: var-lib-containers-storage-overlay-cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763-merged.mount: Deactivated successfully.
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.478143) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8178838 bytes
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.548924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.9 rd, 21.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.3 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 5010, records dropped: 1023 output_compression: NoCompression
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.548969) EVENT_LOG_v1 {"time_micros": 1769848004548952, "job": 14, "event": "compaction_finished", "compaction_time_micros": 380009, "compaction_time_cpu_micros": 16096, "output_level": 6, "num_output_files": 1, "total_output_size": 8178838, "num_input_records": 5010, "num_output_records": 3987, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004549367, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004550247, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.097698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:44 np0005603663 podman[240958]: 2026-01-31 08:26:44.988202943 +0000 UTC m=+1.863108961 container remove ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:26:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:45 np0005603663 systemd[1]: libpod-conmon-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope: Deactivated successfully.
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.394876537 +0000 UTC m=+0.058958748 container create 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:26:45 np0005603663 systemd[1]: Started libpod-conmon-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope.
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.368058326 +0000 UTC m=+0.032140577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.491692496 +0000 UTC m=+0.155774757 container init 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.498541836 +0000 UTC m=+0.162624017 container start 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.50314343 +0000 UTC m=+0.167225701 container attach 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:26:45 np0005603663 sharp_hertz[241083]: 167 167
Jan 31 03:26:45 np0005603663 systemd[1]: libpod-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope: Deactivated successfully.
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.504204831 +0000 UTC m=+0.168287022 container died 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:26:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fc7ddb6011d7a64d2a2424c5e40c591cece94db472c0572d40063e2419c79990-merged.mount: Deactivated successfully.
Jan 31 03:26:45 np0005603663 podman[241067]: 2026-01-31 08:26:45.701890488 +0000 UTC m=+0.365972689 container remove 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:26:45 np0005603663 systemd[1]: libpod-conmon-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope: Deactivated successfully.
Jan 31 03:26:45 np0005603663 podman[241107]: 2026-01-31 08:26:45.867040078 +0000 UTC m=+0.034832306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:46 np0005603663 podman[241107]: 2026-01-31 08:26:46.100786626 +0000 UTC m=+0.268578844 container create 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:26:46 np0005603663 systemd[1]: Started libpod-conmon-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope.
Jan 31 03:26:46 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:46 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:46 np0005603663 podman[241107]: 2026-01-31 08:26:46.557545008 +0000 UTC m=+0.725337196 container init 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:26:46 np0005603663 podman[241107]: 2026-01-31 08:26:46.563924944 +0000 UTC m=+0.731717122 container start 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:26:46 np0005603663 podman[241107]: 2026-01-31 08:26:46.568106106 +0000 UTC m=+0.735898314 container attach 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:26:46 np0005603663 fervent_curran[241124]: {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    "0": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "devices": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "/dev/loop3"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            ],
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_name": "ceph_lv0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_size": "21470642176",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "name": "ceph_lv0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "tags": {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_name": "ceph",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.crush_device_class": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.encrypted": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.objectstore": "bluestore",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_id": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.vdo": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.with_tpm": "0"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            },
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "vg_name": "ceph_vg0"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        }
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    ],
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    "1": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "devices": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "/dev/loop4"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            ],
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_name": "ceph_lv1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_size": "21470642176",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "name": "ceph_lv1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "tags": {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_name": "ceph",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.crush_device_class": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.encrypted": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.objectstore": "bluestore",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_id": "1",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.vdo": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.with_tpm": "0"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            },
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "vg_name": "ceph_vg1"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        }
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    ],
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    "2": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "devices": [
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "/dev/loop5"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            ],
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_name": "ceph_lv2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_size": "21470642176",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "name": "ceph_lv2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "tags": {
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.cluster_name": "ceph",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.crush_device_class": "",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.encrypted": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.objectstore": "bluestore",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osd_id": "2",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.vdo": "0",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:                "ceph.with_tpm": "0"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            },
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "type": "block",
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:            "vg_name": "ceph_vg2"
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:        }
Jan 31 03:26:46 np0005603663 fervent_curran[241124]:    ]
Jan 31 03:26:46 np0005603663 fervent_curran[241124]: }
Jan 31 03:26:46 np0005603663 systemd[1]: libpod-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope: Deactivated successfully.
Jan 31 03:26:46 np0005603663 podman[241133]: 2026-01-31 08:26:46.879098663 +0000 UTC m=+0.033123326 container died 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f-merged.mount: Deactivated successfully.
Jan 31 03:26:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:47 np0005603663 podman[241133]: 2026-01-31 08:26:47.105558548 +0000 UTC m=+0.259583131 container remove 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:26:47 np0005603663 systemd[1]: libpod-conmon-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope: Deactivated successfully.
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.579183782 +0000 UTC m=+0.047569616 container create 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:26:47 np0005603663 systemd[1]: Started libpod-conmon-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope.
Jan 31 03:26:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.558109448 +0000 UTC m=+0.026495272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.658651317 +0000 UTC m=+0.127037131 container init 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.664050474 +0000 UTC m=+0.132436268 container start 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:26:47 np0005603663 gallant_payne[241228]: 167 167
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.669439581 +0000 UTC m=+0.137825395 container attach 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:47 np0005603663 systemd[1]: libpod-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope: Deactivated successfully.
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.670232664 +0000 UTC m=+0.138618488 container died 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:26:47 np0005603663 systemd[1]: var-lib-containers-storage-overlay-48ea4443c358b096939367753becb02523dad83bc66980da33e9e81743e018a5-merged.mount: Deactivated successfully.
Jan 31 03:26:47 np0005603663 podman[241211]: 2026-01-31 08:26:47.742285672 +0000 UTC m=+0.210671476 container remove 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:26:47 np0005603663 systemd[1]: libpod-conmon-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope: Deactivated successfully.
Jan 31 03:26:47 np0005603663 podman[241252]: 2026-01-31 08:26:47.889222111 +0000 UTC m=+0.051120480 container create 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:26:47 np0005603663 systemd[1]: Started libpod-conmon-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope.
Jan 31 03:26:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:26:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:47 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:47 np0005603663 podman[241252]: 2026-01-31 08:26:47.858199637 +0000 UTC m=+0.020098046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:26:47 np0005603663 podman[241252]: 2026-01-31 08:26:47.970302622 +0000 UTC m=+0.132201031 container init 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:26:47 np0005603663 podman[241252]: 2026-01-31 08:26:47.976975157 +0000 UTC m=+0.138873526 container start 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:26:47 np0005603663 podman[241252]: 2026-01-31 08:26:47.984489485 +0000 UTC m=+0.146387874 container attach 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:48 np0005603663 lvm[241348]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:26:48 np0005603663 lvm[241348]: VG ceph_vg1 finished
Jan 31 03:26:48 np0005603663 lvm[241347]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:26:48 np0005603663 lvm[241347]: VG ceph_vg0 finished
Jan 31 03:26:48 np0005603663 lvm[241350]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:26:48 np0005603663 lvm[241350]: VG ceph_vg2 finished
Jan 31 03:26:48 np0005603663 interesting_moore[241269]: {}
Jan 31 03:26:48 np0005603663 systemd[1]: libpod-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Deactivated successfully.
Jan 31 03:26:48 np0005603663 systemd[1]: libpod-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Consumed 1.056s CPU time.
Jan 31 03:26:48 np0005603663 podman[241252]: 2026-01-31 08:26:48.710222892 +0000 UTC m=+0.872121271 container died 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:26:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da-merged.mount: Deactivated successfully.
Jan 31 03:26:48 np0005603663 podman[241252]: 2026-01-31 08:26:48.761344431 +0000 UTC m=+0.923242800 container remove 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:26:48 np0005603663 systemd[1]: libpod-conmon-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Deactivated successfully.
Jan 31 03:26:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:26:48 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:26:48 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:48 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:49 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:49 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:26:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:53 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:56 np0005603663 podman[241398]: 2026-01-31 08:26:56.181063651 +0000 UTC m=+0.056042623 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:26:56 np0005603663 podman[241392]: 2026-01-31 08:26:56.206930425 +0000 UTC m=+0.096039178 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:26:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:26:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:26:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:03 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:07 np0005603663 nova_compute[238824]: 2026-01-31 08:27:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.416 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.417 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.417 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.418 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.418 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.507 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:27:08 np0005603663 nova_compute[238824]: 2026-01-31 08:27:08.509 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:08 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:27:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837891686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.102 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.236 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.453 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.454 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.473 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:27:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720361535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:27:09 np0005603663 nova_compute[238824]: 2026-01-31 08:27:09.997 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:10 np0005603663 nova_compute[238824]: 2026-01-31 08:27:10.004 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:27:10 np0005603663 nova_compute[238824]: 2026-01-31 08:27:10.081 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:27:10 np0005603663 nova_compute[238824]: 2026-01-31 08:27:10.085 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:27:10 np0005603663 nova_compute[238824]: 2026-01-31 08:27:10.086 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:12 np0005603663 nova_compute[238824]: 2026-01-31 08:27:12.009 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:12 np0005603663 nova_compute[238824]: 2026-01-31 08:27:12.010 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:12 np0005603663 nova_compute[238824]: 2026-01-31 08:27:12.010 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:12 np0005603663 nova_compute[238824]: 2026-01-31 08:27:12.011 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:27:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:27:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5823 writes, 24K keys, 5823 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5823 writes, 961 syncs, 6.06 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 31 03:27:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:27:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:27:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:27:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:27:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:27:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 7056 writes, 29K keys, 7056 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7056 writes, 1347 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 31 03:27:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:27 np0005603663 podman[241481]: 2026-01-31 08:27:27.176881606 +0000 UTC m=+0.064269410 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:27:27 np0005603663 podman[241480]: 2026-01-31 08:27:27.185425378 +0000 UTC m=+0.079756108 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 03:27:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:27:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.8 total, 600.0 interval#012Cumulative writes: 5591 writes, 24K keys, 5591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5591 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 227 writes, 342 keys, 227 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 227 writes, 113 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Jan 31 03:27:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:27:31
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root']
Jan 31 03:27:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:27:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:27:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:27:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:27:49 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:27:49 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:27:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:27:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:27:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:51 np0005603663 podman[241739]: 2026-01-31 08:27:51.295028118 +0000 UTC m=+0.032392959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:51 np0005603663 podman[241739]: 2026-01-31 08:27:51.689507128 +0000 UTC m=+0.426871909 container create 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:27:51 np0005603663 systemd[1]: Started libpod-conmon-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope.
Jan 31 03:27:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:52 np0005603663 podman[241739]: 2026-01-31 08:27:52.482085988 +0000 UTC m=+1.219450869 container init 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:27:52 np0005603663 podman[241739]: 2026-01-31 08:27:52.492219145 +0000 UTC m=+1.229583936 container start 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:27:52 np0005603663 focused_haslett[241756]: 167 167
Jan 31 03:27:52 np0005603663 systemd[1]: libpod-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope: Deactivated successfully.
Jan 31 03:27:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:52 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:27:52 np0005603663 podman[241739]: 2026-01-31 08:27:52.708357655 +0000 UTC m=+1.445722466 container attach 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:27:52 np0005603663 podman[241739]: 2026-01-31 08:27:52.708726656 +0000 UTC m=+1.446091477 container died 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:27:53 np0005603663 systemd[1]: var-lib-containers-storage-overlay-01880d2c3c11a601890f059a0790628f8b14800ebf57072c8f0225abcc57be6b-merged.mount: Deactivated successfully.
Jan 31 03:27:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:53 np0005603663 podman[241739]: 2026-01-31 08:27:53.441552226 +0000 UTC m=+2.178917017 container remove 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:27:53 np0005603663 systemd[1]: libpod-conmon-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope: Deactivated successfully.
Jan 31 03:27:53 np0005603663 podman[241781]: 2026-01-31 08:27:53.630321141 +0000 UTC m=+0.075517659 container create d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 03:27:53 np0005603663 podman[241781]: 2026-01-31 08:27:53.588429195 +0000 UTC m=+0.033625703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:53 np0005603663 systemd[1]: Started libpod-conmon-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope.
Jan 31 03:27:53 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:53 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:53 np0005603663 podman[241781]: 2026-01-31 08:27:53.771448987 +0000 UTC m=+0.216645485 container init d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:53 np0005603663 podman[241781]: 2026-01-31 08:27:53.777991952 +0000 UTC m=+0.223188470 container start d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:53 np0005603663 podman[241781]: 2026-01-31 08:27:53.801768496 +0000 UTC m=+0.246964974 container attach d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:54 np0005603663 sad_napier[241797]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:27:54 np0005603663 sad_napier[241797]: --> All data devices are unavailable
Jan 31 03:27:54 np0005603663 systemd[1]: libpod-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope: Deactivated successfully.
Jan 31 03:27:54 np0005603663 podman[241781]: 2026-01-31 08:27:54.246866079 +0000 UTC m=+0.692062557 container died d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:27:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f-merged.mount: Deactivated successfully.
Jan 31 03:27:54 np0005603663 podman[241781]: 2026-01-31 08:27:54.765923245 +0000 UTC m=+1.211119763 container remove d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:27:54 np0005603663 systemd[1]: libpod-conmon-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope: Deactivated successfully.
Jan 31 03:27:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.267847317 +0000 UTC m=+0.047200918 container create 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:27:55 np0005603663 systemd[1]: Started libpod-conmon-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope.
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.247845501 +0000 UTC m=+0.027199132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.400843963 +0000 UTC m=+0.180197614 container init 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.407210163 +0000 UTC m=+0.186563784 container start 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:27:55 np0005603663 frosty_banzai[241909]: 167 167
Jan 31 03:27:55 np0005603663 systemd[1]: libpod-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope: Deactivated successfully.
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.414907551 +0000 UTC m=+0.194261172 container attach 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.415530358 +0000 UTC m=+0.194883969 container died 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:27:55 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f464a803a5d9c90002fe68286d479fe653bf847e2da12b998638244cb0288b16-merged.mount: Deactivated successfully.
Jan 31 03:27:55 np0005603663 podman[241893]: 2026-01-31 08:27:55.513726789 +0000 UTC m=+0.293080440 container remove 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:27:55 np0005603663 systemd[1]: libpod-conmon-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope: Deactivated successfully.
Jan 31 03:27:55 np0005603663 podman[241934]: 2026-01-31 08:27:55.700036874 +0000 UTC m=+0.081090887 container create b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:55 np0005603663 podman[241934]: 2026-01-31 08:27:55.643470063 +0000 UTC m=+0.024524126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:55 np0005603663 systemd[1]: Started libpod-conmon-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope.
Jan 31 03:27:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:55 np0005603663 podman[241934]: 2026-01-31 08:27:55.843037672 +0000 UTC m=+0.224091675 container init b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:27:55 np0005603663 podman[241934]: 2026-01-31 08:27:55.849619359 +0000 UTC m=+0.230673322 container start b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:55 np0005603663 podman[241934]: 2026-01-31 08:27:55.877808037 +0000 UTC m=+0.258862100 container attach b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]: {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    "0": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "devices": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "/dev/loop3"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            ],
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_name": "ceph_lv0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_size": "21470642176",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "name": "ceph_lv0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "tags": {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_name": "ceph",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.crush_device_class": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.encrypted": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.objectstore": "bluestore",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_id": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.vdo": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.with_tpm": "0"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            },
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "vg_name": "ceph_vg0"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        }
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    ],
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    "1": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "devices": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "/dev/loop4"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            ],
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_name": "ceph_lv1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_size": "21470642176",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "name": "ceph_lv1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "tags": {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_name": "ceph",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.crush_device_class": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.encrypted": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.objectstore": "bluestore",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_id": "1",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.vdo": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.with_tpm": "0"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            },
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "vg_name": "ceph_vg1"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        }
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    ],
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    "2": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "devices": [
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "/dev/loop5"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            ],
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_name": "ceph_lv2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_size": "21470642176",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "name": "ceph_lv2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "tags": {
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.cluster_name": "ceph",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.crush_device_class": "",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.encrypted": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.objectstore": "bluestore",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osd_id": "2",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.vdo": "0",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:                "ceph.with_tpm": "0"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            },
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "type": "block",
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:            "vg_name": "ceph_vg2"
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:        }
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]:    ]
Jan 31 03:27:56 np0005603663 wonderful_hawking[241951]: }
Jan 31 03:27:56 np0005603663 systemd[1]: libpod-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope: Deactivated successfully.
Jan 31 03:27:56 np0005603663 podman[241934]: 2026-01-31 08:27:56.16222391 +0000 UTC m=+0.543277903 container died b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:27:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d-merged.mount: Deactivated successfully.
Jan 31 03:27:56 np0005603663 podman[241934]: 2026-01-31 08:27:56.232092338 +0000 UTC m=+0.613146311 container remove b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:27:56 np0005603663 systemd[1]: libpod-conmon-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope: Deactivated successfully.
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.637027424 +0000 UTC m=+0.024716111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.796949982 +0000 UTC m=+0.184638679 container create 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:27:56 np0005603663 systemd[1]: Started libpod-conmon-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope.
Jan 31 03:27:56 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.926162421 +0000 UTC m=+0.313851078 container init 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.935691491 +0000 UTC m=+0.323380188 container start 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:27:56 np0005603663 systemd[1]: libpod-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope: Deactivated successfully.
Jan 31 03:27:56 np0005603663 zealous_hypatia[242055]: 167 167
Jan 31 03:27:56 np0005603663 conmon[242055]: conmon 5305134cc25a55132d75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope/container/memory.events
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.944577063 +0000 UTC m=+0.332265760 container attach 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.945562771 +0000 UTC m=+0.333251428 container died 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:27:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-40158207d6c3cef511d0686ce6bc117049c878d57bf9f6c2f3a2c0fe9636a0fe-merged.mount: Deactivated successfully.
Jan 31 03:27:56 np0005603663 podman[242039]: 2026-01-31 08:27:56.998297934 +0000 UTC m=+0.385986591 container remove 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 03:27:57 np0005603663 systemd[1]: libpod-conmon-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope: Deactivated successfully.
Jan 31 03:27:57 np0005603663 podman[242081]: 2026-01-31 08:27:57.138600367 +0000 UTC m=+0.043880494 container create f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:27:57 np0005603663 systemd[1]: Started libpod-conmon-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope.
Jan 31 03:27:57 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:27:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:57 np0005603663 podman[242081]: 2026-01-31 08:27:57.118886588 +0000 UTC m=+0.024166735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:27:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:57 np0005603663 podman[242081]: 2026-01-31 08:27:57.235951373 +0000 UTC m=+0.141231550 container init f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:27:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:57 np0005603663 podman[242081]: 2026-01-31 08:27:57.243808996 +0000 UTC m=+0.149089133 container start f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:57 np0005603663 podman[242081]: 2026-01-31 08:27:57.258126811 +0000 UTC m=+0.163406948 container attach f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:27:57 np0005603663 podman[242100]: 2026-01-31 08:27:57.275493763 +0000 UTC m=+0.069820878 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:27:57 np0005603663 podman[242102]: 2026-01-31 08:27:57.329938954 +0000 UTC m=+0.115079959 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:27:57 np0005603663 lvm[242221]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:27:57 np0005603663 lvm[242222]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:27:57 np0005603663 lvm[242222]: VG ceph_vg1 finished
Jan 31 03:27:57 np0005603663 lvm[242221]: VG ceph_vg0 finished
Jan 31 03:27:57 np0005603663 lvm[242224]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:27:57 np0005603663 lvm[242224]: VG ceph_vg2 finished
Jan 31 03:27:58 np0005603663 competent_jemison[242098]: {}
Jan 31 03:27:58 np0005603663 systemd[1]: libpod-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Deactivated successfully.
Jan 31 03:27:58 np0005603663 systemd[1]: libpod-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Consumed 1.210s CPU time.
Jan 31 03:27:58 np0005603663 podman[242227]: 2026-01-31 08:27:58.082429801 +0000 UTC m=+0.022535759 container died f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:58 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680-merged.mount: Deactivated successfully.
Jan 31 03:27:58 np0005603663 podman[242227]: 2026-01-31 08:27:58.139868588 +0000 UTC m=+0.079974566 container remove f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:27:58 np0005603663 systemd[1]: libpod-conmon-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Deactivated successfully.
Jan 31 03:27:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:27:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:27:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:27:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:27:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:27:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:28:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:07 np0005603663 nova_compute[238824]: 2026-01-31 08:28:07.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:08 np0005603663 nova_compute[238824]: 2026-01-31 08:28:08.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:08 np0005603663 nova_compute[238824]: 2026-01-31 08:28:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.362 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.362 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.385 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.386 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.386 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.387 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.387 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:28:09 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820916702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:28:09 np0005603663 nova_compute[238824]: 2026-01-31 08:28:09.960 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.109 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.109 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.110 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.110 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.236 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.237 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.256 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:28:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:28:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075693408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.810 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.817 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.833 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.834 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:28:10 np0005603663 nova_compute[238824]: 2026-01-31 08:28:10.834 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:12 np0005603663 nova_compute[238824]: 2026-01-31 08:28:12.834 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:12 np0005603663 nova_compute[238824]: 2026-01-31 08:28:12.835 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:13 np0005603663 nova_compute[238824]: 2026-01-31 08:28:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:13 np0005603663 nova_compute[238824]: 2026-01-31 08:28:13.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:28:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:28:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:28:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:28:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:28:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:28 np0005603663 podman[242313]: 2026-01-31 08:28:28.173217768 +0000 UTC m=+0.052894268 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:28:28 np0005603663 podman[242312]: 2026-01-31 08:28:28.190894269 +0000 UTC m=+0.070480247 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:28:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:28:31
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'backups', 'images']
Jan 31 03:28:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:28:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 03:28:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 03:28:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.527924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119527990, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 251, "total_data_size": 1798018, "memory_usage": 1830304, "flush_reason": "Manual Compaction"}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119595003, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1759698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15279, "largest_seqno": 16437, "table_properties": {"data_size": 1754143, "index_size": 2950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11630, "raw_average_key_size": 19, "raw_value_size": 1743042, "raw_average_value_size": 2924, "num_data_blocks": 135, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848004, "oldest_key_time": 1769848004, "file_creation_time": 1769848119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 67147 microseconds, and 5569 cpu microseconds.
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.595073) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1759698 bytes OK
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.595098) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614126) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614171) EVENT_LOG_v1 {"time_micros": 1769848119614160, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614201) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1792702, prev total WAL file size 1793985, number of live WAL files 2.
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.615159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1718KB)], [35(7987KB)]
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119615234, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9938536, "oldest_snapshot_seqno": -1}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4069 keys, 8126965 bytes, temperature: kUnknown
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119799495, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8126965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8097195, "index_size": 18524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99553, "raw_average_key_size": 24, "raw_value_size": 8020972, "raw_average_value_size": 1971, "num_data_blocks": 782, "num_entries": 4069, "num_filter_entries": 4069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.799798) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8126965 bytes
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.812863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.9 rd, 44.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.8 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 4583, records dropped: 514 output_compression: NoCompression
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.812898) EVENT_LOG_v1 {"time_micros": 1769848119812883, "job": 16, "event": "compaction_finished", "compaction_time_micros": 184344, "compaction_time_cpu_micros": 13961, "output_level": 6, "num_output_files": 1, "total_output_size": 8126965, "num_input_records": 4583, "num_output_records": 4069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119813469, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119815011, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:39 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:28:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:28:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 03:28:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 03:28:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 03:28:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 03:28:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:58 np0005603663 podman[242381]: 2026-01-31 08:28:58.394675911 +0000 UTC m=+0.038185592 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:28:58 np0005603663 podman[242380]: 2026-01-31 08:28:58.425926136 +0000 UTC m=+0.070838517 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:28:58 np0005603663 podman[242494]: 2026-01-31 08:28:58.851339101 +0000 UTC m=+0.142492785 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:58 np0005603663 podman[242494]: 2026-01-31 08:28:58.938004795 +0000 UTC m=+0.229158419 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 31 03:28:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:28:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:28:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:28:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:28:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:28:59 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:29:00 np0005603663 podman[242822]: 2026-01-31 08:29:00.792630408 +0000 UTC m=+0.025302908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:00 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:29:00 np0005603663 podman[242822]: 2026-01-31 08:29:00.94845855 +0000 UTC m=+0.181130980 container create 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:01 np0005603663 systemd[1]: Started libpod-conmon-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope.
Jan 31 03:29:01 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:01 np0005603663 podman[242822]: 2026-01-31 08:29:01.058037673 +0000 UTC m=+0.290710103 container init 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:29:01 np0005603663 podman[242822]: 2026-01-31 08:29:01.062914881 +0000 UTC m=+0.295587311 container start 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:29:01 np0005603663 sleepy_curran[242839]: 167 167
Jan 31 03:29:01 np0005603663 systemd[1]: libpod-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope: Deactivated successfully.
Jan 31 03:29:01 np0005603663 podman[242822]: 2026-01-31 08:29:01.066731969 +0000 UTC m=+0.299404399 container attach 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:01 np0005603663 podman[242822]: 2026-01-31 08:29:01.067171392 +0000 UTC m=+0.299843852 container died 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:29:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b0e21d33da33a8ab516b54ea6a40aedb2c91b2ae0298f4140abe886822a404bb-merged.mount: Deactivated successfully.
Jan 31 03:29:01 np0005603663 podman[242822]: 2026-01-31 08:29:01.109110079 +0000 UTC m=+0.341782509 container remove 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:29:01 np0005603663 systemd[1]: libpod-conmon-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope: Deactivated successfully.
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.253915909 +0000 UTC m=+0.049337758 container create 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:29:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:01 np0005603663 systemd[1]: Started libpod-conmon-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope.
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.225413672 +0000 UTC m=+0.020835561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:01 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:01 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.3450461 +0000 UTC m=+0.140467979 container init 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.350488574 +0000 UTC m=+0.145910423 container start 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.355015082 +0000 UTC m=+0.150436961 container attach 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:29:01 np0005603663 romantic_rosalind[242879]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:29:01 np0005603663 romantic_rosalind[242879]: --> All data devices are unavailable
Jan 31 03:29:01 np0005603663 systemd[1]: libpod-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope: Deactivated successfully.
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.761291866 +0000 UTC m=+0.556713705 container died 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:29:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1-merged.mount: Deactivated successfully.
Jan 31 03:29:01 np0005603663 podman[242862]: 2026-01-31 08:29:01.800619179 +0000 UTC m=+0.596041078 container remove 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:29:01 np0005603663 systemd[1]: libpod-conmon-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope: Deactivated successfully.
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.230204903 +0000 UTC m=+0.034665853 container create bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:02 np0005603663 systemd[1]: Started libpod-conmon-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope.
Jan 31 03:29:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.30848525 +0000 UTC m=+0.112946230 container init bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.214965482 +0000 UTC m=+0.019426392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.316172217 +0000 UTC m=+0.120633167 container start bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:29:02 np0005603663 intelligent_allen[242990]: 167 167
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.320566842 +0000 UTC m=+0.125027862 container attach bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.32156222 +0000 UTC m=+0.126023140 container died bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:29:02 np0005603663 systemd[1]: libpod-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope: Deactivated successfully.
Jan 31 03:29:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f711bcf2227c7c7ddfd3c88b941c7b7cad4b70876e15acd827abb6238eb5357f-merged.mount: Deactivated successfully.
Jan 31 03:29:02 np0005603663 podman[242973]: 2026-01-31 08:29:02.364987149 +0000 UTC m=+0.169448099 container remove bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:29:02 np0005603663 systemd[1]: libpod-conmon-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope: Deactivated successfully.
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.583565079 +0000 UTC m=+0.080831770 container create acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:02 np0005603663 systemd[1]: Started libpod-conmon-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope.
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.532543434 +0000 UTC m=+0.029810205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.653833708 +0000 UTC m=+0.151100419 container init acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.662558115 +0000 UTC m=+0.159824806 container start acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.665496468 +0000 UTC m=+0.162763179 container attach acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]: {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    "0": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "devices": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "/dev/loop3"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            ],
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_name": "ceph_lv0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_size": "21470642176",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "name": "ceph_lv0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "tags": {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_name": "ceph",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.crush_device_class": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.encrypted": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.objectstore": "bluestore",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_id": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.vdo": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.with_tpm": "0"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            },
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "vg_name": "ceph_vg0"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        }
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    ],
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    "1": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "devices": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "/dev/loop4"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            ],
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_name": "ceph_lv1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_size": "21470642176",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "name": "ceph_lv1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "tags": {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_name": "ceph",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.crush_device_class": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.encrypted": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.objectstore": "bluestore",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_id": "1",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.vdo": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.with_tpm": "0"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            },
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "vg_name": "ceph_vg1"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        }
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    ],
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    "2": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "devices": [
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "/dev/loop5"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            ],
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_name": "ceph_lv2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_size": "21470642176",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "name": "ceph_lv2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "tags": {
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.cluster_name": "ceph",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.crush_device_class": "",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.encrypted": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.objectstore": "bluestore",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osd_id": "2",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.vdo": "0",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:                "ceph.with_tpm": "0"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            },
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "type": "block",
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:            "vg_name": "ceph_vg2"
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:        }
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]:    ]
Jan 31 03:29:02 np0005603663 unruffled_turing[243030]: }
Jan 31 03:29:02 np0005603663 systemd[1]: libpod-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope: Deactivated successfully.
Jan 31 03:29:02 np0005603663 podman[243013]: 2026-01-31 08:29:02.970184196 +0000 UTC m=+0.467450927 container died acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:29:03 np0005603663 systemd[1]: var-lib-containers-storage-overlay-aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b-merged.mount: Deactivated successfully.
Jan 31 03:29:03 np0005603663 podman[243013]: 2026-01-31 08:29:03.148557086 +0000 UTC m=+0.645823787 container remove acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:29:03 np0005603663 systemd[1]: libpod-conmon-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope: Deactivated successfully.
Jan 31 03:29:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.572750607 +0000 UTC m=+0.038404788 container create 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:29:03 np0005603663 systemd[1]: Started libpod-conmon-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope.
Jan 31 03:29:03 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.551321681 +0000 UTC m=+0.016975862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.654760239 +0000 UTC m=+0.120414430 container init 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.665835893 +0000 UTC m=+0.131490094 container start 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.670005801 +0000 UTC m=+0.135659982 container attach 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:29:03 np0005603663 affectionate_ganguly[243129]: 167 167
Jan 31 03:29:03 np0005603663 systemd[1]: libpod-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope: Deactivated successfully.
Jan 31 03:29:03 np0005603663 podman[243113]: 2026-01-31 08:29:03.671825983 +0000 UTC m=+0.137480184 container died 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:29:03 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b0d03bdbf577f1b33027f84d9b88eb8de7b8354cb5b26bcf8d6319e90ae08a88-merged.mount: Deactivated successfully.
Jan 31 03:29:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:04 np0005603663 podman[243113]: 2026-01-31 08:29:04.356110468 +0000 UTC m=+0.821764629 container remove 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:29:04 np0005603663 systemd[1]: libpod-conmon-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope: Deactivated successfully.
Jan 31 03:29:04 np0005603663 podman[243154]: 2026-01-31 08:29:04.496340998 +0000 UTC m=+0.024687910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:29:04 np0005603663 podman[243154]: 2026-01-31 08:29:04.613515946 +0000 UTC m=+0.141862828 container create a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:04 np0005603663 systemd[1]: Started libpod-conmon-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope.
Jan 31 03:29:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:29:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:04 np0005603663 podman[243154]: 2026-01-31 08:29:04.880389713 +0000 UTC m=+0.408736595 container init a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:29:04 np0005603663 podman[243154]: 2026-01-31 08:29:04.890380676 +0000 UTC m=+0.418727558 container start a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 03:29:04 np0005603663 podman[243154]: 2026-01-31 08:29:04.953866523 +0000 UTC m=+0.482213435 container attach a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:29:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:05 np0005603663 lvm[243248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:29:05 np0005603663 lvm[243248]: VG ceph_vg0 finished
Jan 31 03:29:05 np0005603663 lvm[243249]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:29:05 np0005603663 lvm[243249]: VG ceph_vg1 finished
Jan 31 03:29:05 np0005603663 lvm[243251]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:29:05 np0005603663 lvm[243251]: VG ceph_vg2 finished
Jan 31 03:29:05 np0005603663 hungry_tesla[243170]: {}
Jan 31 03:29:05 np0005603663 systemd[1]: libpod-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Deactivated successfully.
Jan 31 03:29:05 np0005603663 systemd[1]: libpod-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Consumed 1.159s CPU time.
Jan 31 03:29:05 np0005603663 podman[243154]: 2026-01-31 08:29:05.71905495 +0000 UTC m=+1.247401892 container died a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:29:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202-merged.mount: Deactivated successfully.
Jan 31 03:29:06 np0005603663 podman[243154]: 2026-01-31 08:29:06.46460851 +0000 UTC m=+1.992955432 container remove a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:29:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:29:06 np0005603663 systemd[1]: libpod-conmon-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Deactivated successfully.
Jan 31 03:29:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:29:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.342 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.368 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.370 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.370 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:29:07 np0005603663 nova_compute[238824]: 2026-01-31 08:29:07.462 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:07 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:07 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:29:08 np0005603663 nova_compute[238824]: 2026-01-31 08:29:08.550 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:09 np0005603663 nova_compute[238824]: 2026-01-31 08:29:09.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.367 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.367 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:29:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814591676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:29:10 np0005603663 nova_compute[238824]: 2026-01-31 08:29:10.965 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.131 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.322 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.322 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.341 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:29:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168732476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.855 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.860 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.876 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.878 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:29:11 np0005603663 nova_compute[238824]: 2026-01-31 08:29:11.878 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.873 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.889 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:29:12 np0005603663 nova_compute[238824]: 2026-01-31 08:29:12.889 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:13 np0005603663 nova_compute[238824]: 2026-01-31 08:29:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:13 np0005603663 nova_compute[238824]: 2026-01-31 08:29:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:13 np0005603663 nova_compute[238824]: 2026-01-31 08:29:13.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:29:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:29:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:29:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:29:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:29:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 03:29:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 03:29:28 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 03:29:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:29 np0005603663 podman[243336]: 2026-01-31 08:29:29.161611348 +0000 UTC m=+0.054275241 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:29:29 np0005603663 podman[243335]: 2026-01-31 08:29:29.209429314 +0000 UTC m=+0.101140400 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:29:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:29:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 03:29:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 03:29:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 383 B/s wr, 3 op/s
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:29:31
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'backups', '.mgr']
Jan 31 03:29:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:29:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 03:29:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 03:29:32 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:29:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 03:29:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 03:29:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 03:29:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 03:29:37 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 03:29:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Jan 31 03:29:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 03:29:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 03:29:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.6 MiB/s wr, 32 op/s
Jan 31 03:29:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 03:29:39 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 03:29:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.1 MiB/s wr, 33 op/s
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659181352284859 of space, bias 1.0, pg target 0.19977544056854576 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6927621847268092e-06 of space, bias 4.0, pg target 0.003231314621672171 quantized to 16 (current 16)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:29:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:29:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 552 KiB/s wr, 10 op/s
Jan 31 03:29:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 03:29:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 03:29:46 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 03:29:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 569 KiB/s wr, 11 op/s
Jan 31 03:29:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 463 KiB/s wr, 9 op/s
Jan 31 03:29:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 03:29:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 03:29:52 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 03:29:52 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 03:29:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 03:29:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 03:29:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 03:29:54 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 03:29:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 03:29:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 31 03:29:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 26 op/s
Jan 31 03:29:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 03:29:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 03:29:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 03:30:00 np0005603663 podman[243385]: 2026-01-31 08:30:00.178979205 +0000 UTC m=+0.056065981 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:30:00 np0005603663 podman[243384]: 2026-01-31 08:30:00.230038453 +0000 UTC m=+0.107539230 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:30:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 31 03:30:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 03:30:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 03:30:07 np0005603663 nova_compute[238824]: 2026-01-31 08:30:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:30:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.090712083 +0000 UTC m=+0.019123194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.306355677 +0000 UTC m=+0.234766808 container create f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:30:08 np0005603663 systemd[1]: Started libpod-conmon-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope.
Jan 31 03:30:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.738400719 +0000 UTC m=+0.666811910 container init f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.747833387 +0000 UTC m=+0.676244478 container start f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:30:08 np0005603663 unruffled_mahavira[243588]: 167 167
Jan 31 03:30:08 np0005603663 systemd[1]: libpod-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope: Deactivated successfully.
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.885245783 +0000 UTC m=+0.813656904 container attach f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:30:08 np0005603663 podman[243571]: 2026-01-31 08:30:08.886591121 +0000 UTC m=+0.815002232 container died f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:30:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 03:30:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c629b02b02c06c374517bb4bc934fc410c928146fc2c2cc6cd82cf915008fb97-merged.mount: Deactivated successfully.
Jan 31 03:30:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:10 np0005603663 podman[243571]: 2026-01-31 08:30:10.034050662 +0000 UTC m=+1.962461773 container remove f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:10 np0005603663 systemd[1]: libpod-conmon-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope: Deactivated successfully.
Jan 31 03:30:10 np0005603663 podman[243610]: 2026-01-31 08:30:10.173418154 +0000 UTC m=+0.028238291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:10 np0005603663 podman[243610]: 2026-01-31 08:30:10.314991929 +0000 UTC m=+0.169812026 container create 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:30:10 np0005603663 nova_compute[238824]: 2026-01-31 08:30:10.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:10 np0005603663 systemd[1]: Started libpod-conmon-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope.
Jan 31 03:30:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:10 np0005603663 podman[243610]: 2026-01-31 08:30:10.844829125 +0000 UTC m=+0.699649202 container init 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:30:10 np0005603663 podman[243610]: 2026-01-31 08:30:10.85169071 +0000 UTC m=+0.706510767 container start 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:10 np0005603663 podman[243610]: 2026-01-31 08:30:10.946048116 +0000 UTC m=+0.800868173 container attach 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:30:11 np0005603663 vigilant_elgamal[243627]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:30:11 np0005603663 vigilant_elgamal[243627]: --> All data devices are unavailable
Jan 31 03:30:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 90 B/s rd, 0 B/s wr, 0 op/s
Jan 31 03:30:11 np0005603663 systemd[1]: libpod-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope: Deactivated successfully.
Jan 31 03:30:11 np0005603663 podman[243610]: 2026-01-31 08:30:11.334405099 +0000 UTC m=+1.189225176 container died 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.361 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.363 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d-merged.mount: Deactivated successfully.
Jan 31 03:30:11 np0005603663 podman[243610]: 2026-01-31 08:30:11.412927395 +0000 UTC m=+1.267747452 container remove 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:30:11 np0005603663 systemd[1]: libpod-conmon-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope: Deactivated successfully.
Jan 31 03:30:11 np0005603663 podman[243741]: 2026-01-31 08:30:11.871075867 +0000 UTC m=+0.063160001 container create 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:30:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779464899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:30:11 np0005603663 systemd[1]: Started libpod-conmon-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope.
Jan 31 03:30:11 np0005603663 podman[243741]: 2026-01-31 08:30:11.82606703 +0000 UTC m=+0.018151194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:11 np0005603663 nova_compute[238824]: 2026-01-31 08:30:11.950 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:11 np0005603663 podman[243741]: 2026-01-31 08:30:11.983079233 +0000 UTC m=+0.175163387 container init 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:30:11 np0005603663 podman[243741]: 2026-01-31 08:30:11.990938576 +0000 UTC m=+0.183022710 container start 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:30:11 np0005603663 great_hoover[243759]: 167 167
Jan 31 03:30:11 np0005603663 systemd[1]: libpod-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope: Deactivated successfully.
Jan 31 03:30:11 np0005603663 conmon[243759]: conmon 98aa6a036b5e5d4d253f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope/container/memory.events
Jan 31 03:30:12 np0005603663 podman[243741]: 2026-01-31 08:30:12.01401174 +0000 UTC m=+0.206095894 container attach 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:30:12 np0005603663 podman[243741]: 2026-01-31 08:30:12.014509164 +0000 UTC m=+0.206593308 container died 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:30:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-386e94464947996c547e95d37a58faa5f2559efe3d2f86633e7d5e58f2e687d2-merged.mount: Deactivated successfully.
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.103 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:12 np0005603663 podman[243741]: 2026-01-31 08:30:12.125854602 +0000 UTC m=+0.317938736 container remove 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:30:12 np0005603663 systemd[1]: libpod-conmon-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope: Deactivated successfully.
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.174 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.175 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.242 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.290080449 +0000 UTC m=+0.082985674 container create 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.227591517 +0000 UTC m=+0.020496742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:12 np0005603663 systemd[1]: Started libpod-conmon-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope.
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.331 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.331 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:30:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.422368971 +0000 UTC m=+0.215274206 container init 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.428680329 +0000 UTC m=+0.221585544 container start 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.439409494 +0000 UTC m=+0.232314699 container attach 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.505 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.525 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:30:12 np0005603663 nova_compute[238824]: 2026-01-31 08:30:12.539 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:12 np0005603663 loving_darwin[243801]: {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    "0": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "devices": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "/dev/loop3"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            ],
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_name": "ceph_lv0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_size": "21470642176",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "name": "ceph_lv0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "tags": {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_name": "ceph",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.crush_device_class": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.encrypted": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.objectstore": "bluestore",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_id": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.vdo": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.with_tpm": "0"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            },
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "vg_name": "ceph_vg0"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        }
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    ],
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    "1": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "devices": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "/dev/loop4"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            ],
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_name": "ceph_lv1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_size": "21470642176",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "name": "ceph_lv1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "tags": {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_name": "ceph",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.crush_device_class": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.encrypted": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.objectstore": "bluestore",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_id": "1",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.vdo": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.with_tpm": "0"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            },
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "vg_name": "ceph_vg1"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        }
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    ],
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    "2": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "devices": [
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "/dev/loop5"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            ],
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_name": "ceph_lv2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_size": "21470642176",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "name": "ceph_lv2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "tags": {
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.cluster_name": "ceph",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.crush_device_class": "",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.encrypted": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.objectstore": "bluestore",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osd_id": "2",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.vdo": "0",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:                "ceph.with_tpm": "0"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            },
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "type": "block",
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:            "vg_name": "ceph_vg2"
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:        }
Jan 31 03:30:12 np0005603663 loving_darwin[243801]:    ]
Jan 31 03:30:12 np0005603663 loving_darwin[243801]: }
Jan 31 03:30:12 np0005603663 systemd[1]: libpod-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope: Deactivated successfully.
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.744942098 +0000 UTC m=+0.537847293 container died 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:30:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832-merged.mount: Deactivated successfully.
Jan 31 03:30:12 np0005603663 podman[243785]: 2026-01-31 08:30:12.840623552 +0000 UTC m=+0.633528747 container remove 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:12 np0005603663 systemd[1]: libpod-conmon-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope: Deactivated successfully.
Jan 31 03:30:13 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:30:13 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817873369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:30:13 np0005603663 nova_compute[238824]: 2026-01-31 08:30:13.072 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:13 np0005603663 nova_compute[238824]: 2026-01-31 08:30:13.086 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:30:13 np0005603663 nova_compute[238824]: 2026-01-31 08:30:13.101 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:30:13 np0005603663 nova_compute[238824]: 2026-01-31 08:30:13.103 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:30:13 np0005603663 nova_compute[238824]: 2026-01-31 08:30:13.103 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.294197834 +0000 UTC m=+0.051278115 container create 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:30:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:13 np0005603663 systemd[1]: Started libpod-conmon-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope.
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.261891198 +0000 UTC m=+0.018971479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.486314422 +0000 UTC m=+0.243394703 container init 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.492043175 +0000 UTC m=+0.249123466 container start 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:30:13 np0005603663 zealous_buck[243923]: 167 167
Jan 31 03:30:13 np0005603663 systemd[1]: libpod-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope: Deactivated successfully.
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.64034176 +0000 UTC m=+0.397422021 container attach 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.641161913 +0000 UTC m=+0.398242184 container died 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:30:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bf6ca8c53e00456ab5fce6fbbb0cdacd9b7ebd003df2c58bc494f14ec7d9a9fe-merged.mount: Deactivated successfully.
Jan 31 03:30:13 np0005603663 podman[243907]: 2026-01-31 08:30:13.838444798 +0000 UTC m=+0.595525049 container remove 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:13 np0005603663 systemd[1]: libpod-conmon-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope: Deactivated successfully.
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:14.001994046 +0000 UTC m=+0.071134248 container create a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:13.953558243 +0000 UTC m=+0.022698515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:30:14 np0005603663 systemd[1]: Started libpod-conmon-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope.
Jan 31 03:30:14 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:30:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:14 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:14.147026649 +0000 UTC m=+0.216166891 container init a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:14.155412247 +0000 UTC m=+0.224552429 container start a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:14.178393189 +0000 UTC m=+0.247533381 container attach a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:30:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:14 np0005603663 lvm[244042]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:30:14 np0005603663 lvm[244042]: VG ceph_vg0 finished
Jan 31 03:30:14 np0005603663 lvm[244044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:30:14 np0005603663 lvm[244044]: VG ceph_vg1 finished
Jan 31 03:30:14 np0005603663 lvm[244046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:30:14 np0005603663 lvm[244046]: VG ceph_vg2 finished
Jan 31 03:30:14 np0005603663 practical_swirles[243965]: {}
Jan 31 03:30:14 np0005603663 podman[243949]: 2026-01-31 08:30:14.942096616 +0000 UTC m=+1.011236798 container died a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:30:14 np0005603663 systemd[1]: libpod-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Deactivated successfully.
Jan 31 03:30:14 np0005603663 systemd[1]: libpod-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Consumed 1.188s CPU time.
Jan 31 03:30:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769-merged.mount: Deactivated successfully.
Jan 31 03:30:15 np0005603663 podman[243949]: 2026-01-31 08:30:15.047451264 +0000 UTC m=+1.116591446 container remove a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:30:15 np0005603663 systemd[1]: libpod-conmon-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Deactivated successfully.
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.099 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.101 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:30:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:30:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.145 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:30:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:15 np0005603663 nova_compute[238824]: 2026-01-31 08:30:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:30:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:30:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:30:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:30:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:30:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:30:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:30:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:30:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:31 np0005603663 podman[244090]: 2026-01-31 08:30:31.178949832 +0000 UTC m=+0.062512524 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:30:31 np0005603663 podman[244089]: 2026-01-31 08:30:31.227971472 +0000 UTC m=+0.111811282 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:30:31
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Jan 31 03:30:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:30:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:30:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:30:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:02 np0005603663 podman[244136]: 2026-01-31 08:31:02.176869446 +0000 UTC m=+0.050160374 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 03:31:02 np0005603663 podman[244135]: 2026-01-31 08:31:02.225073983 +0000 UTC m=+0.095912031 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:09 np0005603663 nova_compute[238824]: 2026-01-31 08:31:09.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:31:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:31:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:31:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975075714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:31:11 np0005603663 nova_compute[238824]: 2026-01-31 08:31:11.870 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.018 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.019 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.019 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.020 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.086 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.086 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.101 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:31:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:31:12 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572898568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.659 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.664 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.680 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.684 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:31:12 np0005603663 nova_compute[238824]: 2026-01-31 08:31:12.684 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:13 np0005603663 nova_compute[238824]: 2026-01-31 08:31:13.686 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:13 np0005603663 nova_compute[238824]: 2026-01-31 08:31:13.686 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:13 np0005603663 nova_compute[238824]: 2026-01-31 08:31:13.687 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:13 np0005603663 nova_compute[238824]: 2026-01-31 08:31:13.687 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:31:14 np0005603663 nova_compute[238824]: 2026-01-31 08:31:14.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:15 np0005603663 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:15 np0005603663 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:31:15 np0005603663 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:31:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:15 np0005603663 nova_compute[238824]: 2026-01-31 08:31:15.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:31:15 np0005603663 nova_compute[238824]: 2026-01-31 08:31:15.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:31:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:31:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:31:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:31:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.46035623 +0000 UTC m=+0.089522740 container create eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.402676514 +0000 UTC m=+0.031843014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:16 np0005603663 systemd[1]: Started libpod-conmon-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope.
Jan 31 03:31:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.561904459 +0000 UTC m=+0.191070929 container init eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.568135396 +0000 UTC m=+0.197301916 container start eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:31:16 np0005603663 adoring_lumiere[244383]: 167 167
Jan 31 03:31:16 np0005603663 systemd[1]: libpod-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope: Deactivated successfully.
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.587159286 +0000 UTC m=+0.216325886 container attach eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.587710581 +0000 UTC m=+0.216877071 container died eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:16 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:31:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-cc615f14ce799d1a40cce9b5be9ced2762ca90445e89473f22e7b3930c30a775-merged.mount: Deactivated successfully.
Jan 31 03:31:16 np0005603663 podman[244367]: 2026-01-31 08:31:16.713451247 +0000 UTC m=+0.342617767 container remove eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:31:16 np0005603663 systemd[1]: libpod-conmon-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope: Deactivated successfully.
Jan 31 03:31:16 np0005603663 podman[244409]: 2026-01-31 08:31:16.889010855 +0000 UTC m=+0.068199004 container create a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:31:16 np0005603663 podman[244409]: 2026-01-31 08:31:16.8450734 +0000 UTC m=+0.024261539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:16 np0005603663 systemd[1]: Started libpod-conmon-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope.
Jan 31 03:31:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:17 np0005603663 podman[244409]: 2026-01-31 08:31:17.01047751 +0000 UTC m=+0.189665699 container init a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:31:17 np0005603663 podman[244409]: 2026-01-31 08:31:17.019918478 +0000 UTC m=+0.199106647 container start a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:31:17 np0005603663 podman[244409]: 2026-01-31 08:31:17.100519104 +0000 UTC m=+0.279707243 container attach a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:31:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:17 np0005603663 nova_compute[238824]: 2026-01-31 08:31:17.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:17 np0005603663 gifted_bouman[244426]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:31:17 np0005603663 gifted_bouman[244426]: --> All data devices are unavailable
Jan 31 03:31:17 np0005603663 systemd[1]: libpod-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope: Deactivated successfully.
Jan 31 03:31:17 np0005603663 podman[244446]: 2026-01-31 08:31:17.476230438 +0000 UTC m=+0.022882300 container died a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:31:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2-merged.mount: Deactivated successfully.
Jan 31 03:31:17 np0005603663 podman[244446]: 2026-01-31 08:31:17.515761059 +0000 UTC m=+0.062412921 container remove a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:31:17 np0005603663 systemd[1]: libpod-conmon-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope: Deactivated successfully.
Jan 31 03:31:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:17 np0005603663 podman[244523]: 2026-01-31 08:31:17.941280416 +0000 UTC m=+0.047573380 container create 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:31:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:31:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:31:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:31:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:31:17 np0005603663 systemd[1]: Started libpod-conmon-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope.
Jan 31 03:31:18 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:17.920015813 +0000 UTC m=+0.026308837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:18.022360395 +0000 UTC m=+0.128653379 container init 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:18.030052924 +0000 UTC m=+0.136345858 container start 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:31:18 np0005603663 silly_einstein[244540]: 167 167
Jan 31 03:31:18 np0005603663 systemd[1]: libpod-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope: Deactivated successfully.
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:18.036036393 +0000 UTC m=+0.142329337 container attach 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:18.036534307 +0000 UTC m=+0.142827261 container died 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:31:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-47855452fe1dcbe7165ace8db6e3fcd6c0c2cb83d087e5e962c252d4c2748333-merged.mount: Deactivated successfully.
Jan 31 03:31:18 np0005603663 podman[244523]: 2026-01-31 08:31:18.109564408 +0000 UTC m=+0.215857342 container remove 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:31:18 np0005603663 systemd[1]: libpod-conmon-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope: Deactivated successfully.
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.290502839 +0000 UTC m=+0.062107342 container create c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:31:18 np0005603663 systemd[1]: Started libpod-conmon-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope.
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.257975447 +0000 UTC m=+0.029580030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:18 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:18 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.377299301 +0000 UTC m=+0.148903804 container init c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.383829826 +0000 UTC m=+0.155434319 container start c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.3948877 +0000 UTC m=+0.166492203 container attach c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:31:18 np0005603663 focused_austin[244580]: {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    "0": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "devices": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "/dev/loop3"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            ],
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_name": "ceph_lv0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_size": "21470642176",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "name": "ceph_lv0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "tags": {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_name": "ceph",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.crush_device_class": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.encrypted": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.objectstore": "bluestore",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_id": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.vdo": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.with_tpm": "0"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            },
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "vg_name": "ceph_vg0"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        }
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    ],
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    "1": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "devices": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "/dev/loop4"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            ],
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_name": "ceph_lv1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_size": "21470642176",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "name": "ceph_lv1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "tags": {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_name": "ceph",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.crush_device_class": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.encrypted": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.objectstore": "bluestore",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_id": "1",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.vdo": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.with_tpm": "0"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            },
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "vg_name": "ceph_vg1"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        }
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    ],
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    "2": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "devices": [
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "/dev/loop5"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            ],
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_name": "ceph_lv2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_size": "21470642176",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "name": "ceph_lv2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "tags": {
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.cluster_name": "ceph",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.crush_device_class": "",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.encrypted": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.objectstore": "bluestore",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osd_id": "2",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.vdo": "0",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:                "ceph.with_tpm": "0"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            },
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "type": "block",
Jan 31 03:31:18 np0005603663 focused_austin[244580]:            "vg_name": "ceph_vg2"
Jan 31 03:31:18 np0005603663 focused_austin[244580]:        }
Jan 31 03:31:18 np0005603663 focused_austin[244580]:    ]
Jan 31 03:31:18 np0005603663 focused_austin[244580]: }
Jan 31 03:31:18 np0005603663 systemd[1]: libpod-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope: Deactivated successfully.
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.680678224 +0000 UTC m=+0.452282717 container died c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:31:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75-merged.mount: Deactivated successfully.
Jan 31 03:31:18 np0005603663 podman[244563]: 2026-01-31 08:31:18.739551964 +0000 UTC m=+0.511156467 container remove c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:31:18 np0005603663 systemd[1]: libpod-conmon-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope: Deactivated successfully.
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.184854592 +0000 UTC m=+0.042687402 container create 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:31:19 np0005603663 systemd[1]: Started libpod-conmon-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope.
Jan 31 03:31:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.262764441 +0000 UTC m=+0.120597271 container init 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.168534779 +0000 UTC m=+0.026367619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.269953925 +0000 UTC m=+0.127786735 container start 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.274056571 +0000 UTC m=+0.131889412 container attach 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:31:19 np0005603663 brave_gagarin[244679]: 167 167
Jan 31 03:31:19 np0005603663 systemd[1]: libpod-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope: Deactivated successfully.
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.277466438 +0000 UTC m=+0.135299278 container died 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:31:19 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5cc15e7f2efa22ca95ef0d839e810d27dea9ab07ffd5432073788cd3c0de90c4-merged.mount: Deactivated successfully.
Jan 31 03:31:19 np0005603663 podman[244663]: 2026-01-31 08:31:19.320764856 +0000 UTC m=+0.178597666 container remove 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:31:19 np0005603663 systemd[1]: libpod-conmon-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope: Deactivated successfully.
Jan 31 03:31:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:19 np0005603663 podman[244703]: 2026-01-31 08:31:19.45171026 +0000 UTC m=+0.042463276 container create 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:31:19 np0005603663 systemd[1]: Started libpod-conmon-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope.
Jan 31 03:31:19 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:31:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:19 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:19 np0005603663 podman[244703]: 2026-01-31 08:31:19.434150562 +0000 UTC m=+0.024903598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:31:19 np0005603663 podman[244703]: 2026-01-31 08:31:19.536536865 +0000 UTC m=+0.127289891 container init 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:31:19 np0005603663 podman[244703]: 2026-01-31 08:31:19.548945657 +0000 UTC m=+0.139698703 container start 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:31:19 np0005603663 podman[244703]: 2026-01-31 08:31:19.556865602 +0000 UTC m=+0.147618638 container attach 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:31:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:20 np0005603663 lvm[244799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:31:20 np0005603663 lvm[244800]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:31:20 np0005603663 lvm[244800]: VG ceph_vg1 finished
Jan 31 03:31:20 np0005603663 lvm[244799]: VG ceph_vg0 finished
Jan 31 03:31:20 np0005603663 lvm[244802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:31:20 np0005603663 lvm[244802]: VG ceph_vg2 finished
Jan 31 03:31:20 np0005603663 magical_shannon[244720]: {}
Jan 31 03:31:20 np0005603663 systemd[1]: libpod-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Deactivated successfully.
Jan 31 03:31:20 np0005603663 podman[244703]: 2026-01-31 08:31:20.394816044 +0000 UTC m=+0.985569160 container died 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:31:20 np0005603663 systemd[1]: libpod-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Consumed 1.237s CPU time.
Jan 31 03:31:20 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c-merged.mount: Deactivated successfully.
Jan 31 03:31:20 np0005603663 podman[244703]: 2026-01-31 08:31:20.438966496 +0000 UTC m=+1.029719552 container remove 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:31:20 np0005603663 systemd[1]: libpod-conmon-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Deactivated successfully.
Jan 31 03:31:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:31:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:31:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:21 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:31:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:31:31
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms']
Jan 31 03:31:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:31:33 np0005603663 podman[244843]: 2026-01-31 08:31:33.169092653 +0000 UTC m=+0.059271399 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:31:33 np0005603663 podman[244842]: 2026-01-31 08:31:33.189382708 +0000 UTC m=+0.080929323 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:31:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:33 np0005603663 ceph-osd[85971]: bluestore.MempoolThread fragmentation_score=0.000115 took=0.000017s
Jan 31 03:31:33 np0005603663 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000027s
Jan 31 03:31:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000028s
Jan 31 03:31:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:39 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:31:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:49 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:54 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:31:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:04 np0005603663 podman[244886]: 2026-01-31 08:32:04.191442375 +0000 UTC m=+0.075233641 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:32:04 np0005603663 podman[244885]: 2026-01-31 08:32:04.21313902 +0000 UTC m=+0.099004885 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:32:04 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:09 np0005603663 nova_compute[238824]: 2026-01-31 08:32:09.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.397 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:32:13 np0005603663 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:32:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410728628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.042 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.226 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.307 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.308 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:32:14 np0005603663 nova_compute[238824]: 2026-01-31 08:32:14.327 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452447454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:32:15 np0005603663 nova_compute[238824]: 2026-01-31 08:32:15.072 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:15 np0005603663 nova_compute[238824]: 2026-01-31 08:32:15.077 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:15 np0005603663 nova_compute[238824]: 2026-01-31 08:32:15.099 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:15 np0005603663 nova_compute[238824]: 2026-01-31 08:32:15.100 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:32:15 np0005603663 nova_compute[238824]: 2026-01-31 08:32:15.101 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:15.234906) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848335234965, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2031, "num_deletes": 253, "total_data_size": 3392812, "memory_usage": 3448168, "flush_reason": "Manual Compaction"}
Jan 31 03:32:15 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 03:32:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336112029, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1983624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16438, "largest_seqno": 18468, "table_properties": {"data_size": 1976835, "index_size": 3607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16924, "raw_average_key_size": 20, "raw_value_size": 1961819, "raw_average_value_size": 2372, "num_data_blocks": 165, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848119, "oldest_key_time": 1769848119, "file_creation_time": 1769848335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 877161 microseconds, and 3876 cpu microseconds.
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.112083) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1983624 bytes OK
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.112106) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271797) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271850) EVENT_LOG_v1 {"time_micros": 1769848336271840, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3384234, prev total WAL file size 3386485, number of live WAL files 2.
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.272762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1937KB)], [38(7936KB)]
Jan 31 03:32:16 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336272813, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10110589, "oldest_snapshot_seqno": -1}
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.096 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.096 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4477 keys, 8097248 bytes, temperature: kUnknown
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337117975, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8097248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8066255, "index_size": 18723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 108263, "raw_average_key_size": 24, "raw_value_size": 7984230, "raw_average_value_size": 1783, "num_data_blocks": 794, "num_entries": 4477, "num_filter_entries": 4477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.168 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.169 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.169 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.118232) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8097248 bytes
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.275680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.0 rd, 9.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.8 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4896, records dropped: 419 output_compression: NoCompression
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.275724) EVENT_LOG_v1 {"time_micros": 1769848337275706, "job": 18, "event": "compaction_finished", "compaction_time_micros": 845248, "compaction_time_cpu_micros": 24544, "output_level": 6, "num_output_files": 1, "total_output_size": 8097248, "num_input_records": 4896, "num_output_records": 4477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337276228, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337277310, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.272660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:17 np0005603663 nova_compute[238824]: 2026-01-31 08:32:17.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:32:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:32:19 np0005603663 nova_compute[238824]: 2026-01-31 08:32:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:32:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:32:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:32:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:32:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:32:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:32:23 np0005603663 podman[245116]: 2026-01-31 08:32:23.894021412 +0000 UTC m=+0.021178571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:24 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:32:24 np0005603663 podman[245116]: 2026-01-31 08:32:24.407023659 +0000 UTC m=+0.534180728 container create c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:24 np0005603663 systemd[1]: Started libpod-conmon-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope.
Jan 31 03:32:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:24 np0005603663 podman[245116]: 2026-01-31 08:32:24.852785763 +0000 UTC m=+0.979942912 container init c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:24 np0005603663 podman[245116]: 2026-01-31 08:32:24.861081468 +0000 UTC m=+0.988238537 container start c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:32:24 np0005603663 inspiring_mendeleev[245133]: 167 167
Jan 31 03:32:24 np0005603663 systemd[1]: libpod-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope: Deactivated successfully.
Jan 31 03:32:24 np0005603663 conmon[245133]: conmon c344c7950536fa2a1887 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope/container/memory.events
Jan 31 03:32:25 np0005603663 podman[245116]: 2026-01-31 08:32:25.003908393 +0000 UTC m=+1.131065512 container attach c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:32:25 np0005603663 podman[245116]: 2026-01-31 08:32:25.004688785 +0000 UTC m=+1.131845854 container died c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:32:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:25 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.700947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848346701005, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 251, "total_data_size": 163909, "memory_usage": 170104, "flush_reason": "Manual Compaction"}
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848346869165, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 162760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18469, "largest_seqno": 18802, "table_properties": {"data_size": 160632, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 156361, "raw_average_value_size": 530, "num_data_blocks": 13, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848336, "oldest_key_time": 1769848336, "file_creation_time": 1769848346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 168243 microseconds, and 1160 cpu microseconds.
Jan 31 03:32:26 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.869200) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 162760 bytes OK
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.869216) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086051) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086109) EVENT_LOG_v1 {"time_micros": 1769848347086099, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 161560, prev total WAL file size 162715, number of live WAL files 2.
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(158KB)], [41(7907KB)]
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347086696, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8260008, "oldest_snapshot_seqno": -1}
Jan 31 03:32:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-caf6d1304db9059e68b7f881624d33d8a0b19737a451f65e6b64cf12b0b4d054-merged.mount: Deactivated successfully.
Jan 31 03:32:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4261 keys, 6487523 bytes, temperature: kUnknown
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347580222, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6487523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6459522, "index_size": 16244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104502, "raw_average_key_size": 24, "raw_value_size": 6382840, "raw_average_value_size": 1497, "num_data_blocks": 680, "num_entries": 4261, "num_filter_entries": 4261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.580594) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6487523 bytes
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.906970) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.7 rd, 13.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.7 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(90.6) write-amplify(39.9) OK, records in: 4772, records dropped: 511 output_compression: NoCompression
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907012) EVENT_LOG_v1 {"time_micros": 1769848347906995, "job": 20, "event": "compaction_finished", "compaction_time_micros": 493667, "compaction_time_cpu_micros": 18988, "output_level": 6, "num_output_files": 1, "total_output_size": 6487523, "num_input_records": 4772, "num_output_records": 4261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347907202, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347907963, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:27 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.908001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:28 np0005603663 podman[245116]: 2026-01-31 08:32:28.851197619 +0000 UTC m=+4.978354688 container remove c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:32:28 np0005603663 systemd[1]: libpod-conmon-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope: Deactivated successfully.
Jan 31 03:32:29 np0005603663 podman[245158]: 2026-01-31 08:32:28.969760637 +0000 UTC m=+0.023937489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:29 np0005603663 podman[245158]: 2026-01-31 08:32:29.285655343 +0000 UTC m=+0.339832105 container create 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:32:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:29 np0005603663 systemd[1]: Started libpod-conmon-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope.
Jan 31 03:32:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:29 np0005603663 podman[245158]: 2026-01-31 08:32:29.991668458 +0000 UTC m=+1.045845240 container init 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:32:30 np0005603663 podman[245158]: 2026-01-31 08:32:30.00376593 +0000 UTC m=+1.057942692 container start 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:32:30 np0005603663 podman[245158]: 2026-01-31 08:32:30.224630225 +0000 UTC m=+1.278807017 container attach 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:30 np0005603663 serene_kowalevski[245174]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:32:30 np0005603663 serene_kowalevski[245174]: --> All data devices are unavailable
Jan 31 03:32:30 np0005603663 systemd[1]: libpod-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope: Deactivated successfully.
Jan 31 03:32:30 np0005603663 podman[245158]: 2026-01-31 08:32:30.466514055 +0000 UTC m=+1.520690817 container died 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c-merged.mount: Deactivated successfully.
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:32:31
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log']
Jan 31 03:32:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:32:33 np0005603663 podman[245158]: 2026-01-31 08:32:33.127926287 +0000 UTC m=+4.182103059 container remove 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:33 np0005603663 systemd[1]: libpod-conmon-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope: Deactivated successfully.
Jan 31 03:32:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:33 np0005603663 podman[245269]: 2026-01-31 08:32:33.548881929 +0000 UTC m=+0.026901663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:33 np0005603663 podman[245269]: 2026-01-31 08:32:33.827090818 +0000 UTC m=+0.305110492 container create faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:32:34 np0005603663 systemd[1]: Started libpod-conmon-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope.
Jan 31 03:32:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:34 np0005603663 podman[245269]: 2026-01-31 08:32:34.589564821 +0000 UTC m=+1.067584545 container init faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:32:34 np0005603663 podman[245269]: 2026-01-31 08:32:34.596738294 +0000 UTC m=+1.074757968 container start faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:32:34 np0005603663 nifty_payne[245285]: 167 167
Jan 31 03:32:34 np0005603663 systemd[1]: libpod-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope: Deactivated successfully.
Jan 31 03:32:34 np0005603663 podman[245269]: 2026-01-31 08:32:34.751569959 +0000 UTC m=+1.229589623 container attach faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:32:34 np0005603663 podman[245269]: 2026-01-31 08:32:34.752532427 +0000 UTC m=+1.230552081 container died faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:32:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f26e65f073405f01beb432397e55b051e66a8c79c0c77a76d824f2b28270318d-merged.mount: Deactivated successfully.
Jan 31 03:32:36 np0005603663 podman[245269]: 2026-01-31 08:32:36.737736497 +0000 UTC m=+3.215756131 container remove faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:36 np0005603663 systemd[1]: libpod-conmon-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope: Deactivated successfully.
Jan 31 03:32:36 np0005603663 podman[245286]: 2026-01-31 08:32:36.806455823 +0000 UTC m=+2.562363296 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:32:36 np0005603663 podman[245288]: 2026-01-31 08:32:36.915392318 +0000 UTC m=+2.671100855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:32:37 np0005603663 podman[245350]: 2026-01-31 08:32:36.914859093 +0000 UTC m=+0.031138213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:37 np0005603663 podman[245350]: 2026-01-31 08:32:37.116512754 +0000 UTC m=+0.232791834 container create 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:32:37 np0005603663 systemd[1]: Started libpod-conmon-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope.
Jan 31 03:32:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:37 np0005603663 podman[245350]: 2026-01-31 08:32:37.722724972 +0000 UTC m=+0.839004062 container init 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:37 np0005603663 podman[245350]: 2026-01-31 08:32:37.729741581 +0000 UTC m=+0.846020691 container start 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]: {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    "0": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "devices": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "/dev/loop3"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            ],
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_name": "ceph_lv0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_size": "21470642176",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "name": "ceph_lv0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "tags": {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_name": "ceph",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.crush_device_class": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.encrypted": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.objectstore": "bluestore",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_id": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.vdo": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.with_tpm": "0"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            },
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "vg_name": "ceph_vg0"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        }
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    ],
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    "1": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "devices": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "/dev/loop4"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            ],
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_name": "ceph_lv1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_size": "21470642176",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "name": "ceph_lv1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "tags": {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_name": "ceph",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.crush_device_class": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.encrypted": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.objectstore": "bluestore",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_id": "1",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.vdo": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.with_tpm": "0"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            },
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "vg_name": "ceph_vg1"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        }
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    ],
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    "2": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "devices": [
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "/dev/loop5"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            ],
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_name": "ceph_lv2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_size": "21470642176",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "name": "ceph_lv2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "tags": {
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.cluster_name": "ceph",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.crush_device_class": "",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.encrypted": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.objectstore": "bluestore",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osd_id": "2",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.vdo": "0",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:                "ceph.with_tpm": "0"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            },
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "type": "block",
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:            "vg_name": "ceph_vg2"
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:        }
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]:    ]
Jan 31 03:32:38 np0005603663 sharp_lederberg[245372]: }
Jan 31 03:32:38 np0005603663 systemd[1]: libpod-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope: Deactivated successfully.
Jan 31 03:32:38 np0005603663 podman[245350]: 2026-01-31 08:32:38.08389784 +0000 UTC m=+1.200176960 container attach 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:32:38 np0005603663 podman[245350]: 2026-01-31 08:32:38.084613791 +0000 UTC m=+1.200892891 container died 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:32:38 np0005603663 systemd[1]: var-lib-containers-storage-overlay-494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9-merged.mount: Deactivated successfully.
Jan 31 03:32:38 np0005603663 podman[245350]: 2026-01-31 08:32:38.932226975 +0000 UTC m=+2.048506045 container remove 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:39 np0005603663 systemd[1]: libpod-conmon-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope: Deactivated successfully.
Jan 31 03:32:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:39 np0005603663 podman[245455]: 2026-01-31 08:32:39.376375023 +0000 UTC m=+0.023058894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:39 np0005603663 podman[245455]: 2026-01-31 08:32:39.660445208 +0000 UTC m=+0.307129059 container create 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:32:39 np0005603663 systemd[1]: Started libpod-conmon-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope.
Jan 31 03:32:39 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:39 np0005603663 podman[245455]: 2026-01-31 08:32:39.917180499 +0000 UTC m=+0.563864360 container init 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:32:39 np0005603663 podman[245455]: 2026-01-31 08:32:39.92429386 +0000 UTC m=+0.570977721 container start 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:32:39 np0005603663 musing_goldwasser[245471]: 167 167
Jan 31 03:32:39 np0005603663 systemd[1]: libpod-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope: Deactivated successfully.
Jan 31 03:32:40 np0005603663 podman[245455]: 2026-01-31 08:32:40.044827233 +0000 UTC m=+0.691511504 container attach 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:32:40 np0005603663 podman[245455]: 2026-01-31 08:32:40.045480842 +0000 UTC m=+0.692164733 container died 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:32:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:40 np0005603663 systemd[1]: var-lib-containers-storage-overlay-870f0b24ba848db99d8aa0a685d550cad6de8da3181389310208b8a3796cde58-merged.mount: Deactivated successfully.
Jan 31 03:32:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:41 np0005603663 podman[245455]: 2026-01-31 08:32:41.4666462 +0000 UTC m=+2.113330091 container remove 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:32:41 np0005603663 systemd[1]: libpod-conmon-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope: Deactivated successfully.
Jan 31 03:32:41 np0005603663 podman[245495]: 2026-01-31 08:32:41.622567076 +0000 UTC m=+0.028224651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:32:41 np0005603663 podman[245495]: 2026-01-31 08:32:41.759245676 +0000 UTC m=+0.164903201 container create 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:41 np0005603663 systemd[1]: Started libpod-conmon-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope.
Jan 31 03:32:41 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:32:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:41 np0005603663 podman[245495]: 2026-01-31 08:32:41.970595192 +0000 UTC m=+0.376252757 container init 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:32:41 np0005603663 podman[245495]: 2026-01-31 08:32:41.981171301 +0000 UTC m=+0.386828806 container start 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 03:32:42 np0005603663 podman[245495]: 2026-01-31 08:32:42.054086036 +0000 UTC m=+0.459743541 container attach 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:32:42 np0005603663 lvm[245590]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:32:42 np0005603663 lvm[245591]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:32:42 np0005603663 lvm[245591]: VG ceph_vg1 finished
Jan 31 03:32:42 np0005603663 lvm[245590]: VG ceph_vg0 finished
Jan 31 03:32:42 np0005603663 lvm[245593]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:32:42 np0005603663 lvm[245593]: VG ceph_vg2 finished
Jan 31 03:32:42 np0005603663 condescending_greider[245512]: {}
Jan 31 03:32:42 np0005603663 systemd[1]: libpod-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Deactivated successfully.
Jan 31 03:32:42 np0005603663 systemd[1]: libpod-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Consumed 1.099s CPU time.
Jan 31 03:32:42 np0005603663 podman[245495]: 2026-01-31 08:32:42.815752336 +0000 UTC m=+1.221409861 container died 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:32:43 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca-merged.mount: Deactivated successfully.
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:32:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:43 np0005603663 podman[245495]: 2026-01-31 08:32:43.472790412 +0000 UTC m=+1.878447937 container remove 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:32:43 np0005603663 systemd[1]: libpod-conmon-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Deactivated successfully.
Jan 31 03:32:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:32:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:43 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:32:43 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:32:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:32:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:07 np0005603663 podman[245634]: 2026-01-31 08:33:07.173292627 +0000 UTC m=+0.069907321 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:33:07 np0005603663 podman[245635]: 2026-01-31 08:33:07.18327858 +0000 UTC m=+0.074390958 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:33:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:09 np0005603663 nova_compute[238824]: 2026-01-31 08:33:09.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:14 np0005603663 nova_compute[238824]: 2026-01-31 08:33:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:14 np0005603663 nova_compute[238824]: 2026-01-31 08:33:14.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:33:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605119693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:33:15 np0005603663 nova_compute[238824]: 2026-01-31 08:33:15.929 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.067 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.153 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.153 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.172 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:33:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187874455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.652 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.658 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.683 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.686 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:33:16 np0005603663 nova_compute[238824]: 2026-01-31 08:33:16.687 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:17 np0005603663 nova_compute[238824]: 2026-01-31 08:33:17.682 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:17 np0005603663 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:17 np0005603663 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:33:17 np0005603663 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:33:17 np0005603663 nova_compute[238824]: 2026-01-31 08:33:17.710 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:33:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.891 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.891 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.892 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:33:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:33:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:33:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:33:18 np0005603663 nova_compute[238824]: 2026-01-31 08:33:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:21 np0005603663 nova_compute[238824]: 2026-01-31 08:33:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:33:31
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Jan 31 03:33:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:33:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:38 np0005603663 podman[245723]: 2026-01-31 08:33:38.175280295 +0000 UTC m=+0.059390910 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:33:38 np0005603663 podman[245722]: 2026-01-31 08:33:38.20803651 +0000 UTC m=+0.092098394 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:33:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:33:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:44 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:33:44 np0005603663 podman[245905]: 2026-01-31 08:33:44.958358095 +0000 UTC m=+0.059998697 container create 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:44.920183466 +0000 UTC m=+0.021824108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:45 np0005603663 systemd[1]: Started libpod-conmon-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope.
Jan 31 03:33:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:45.134678279 +0000 UTC m=+0.236318921 container init 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:45.142616903 +0000 UTC m=+0.244257525 container start 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:33:45 np0005603663 sharp_dewdney[245922]: 167 167
Jan 31 03:33:45 np0005603663 systemd[1]: libpod-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope: Deactivated successfully.
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:45.170236044 +0000 UTC m=+0.271876656 container attach 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:45.171215091 +0000 UTC m=+0.272855703 container died 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:45 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0ba6451af2729b3d56a0e865b6046d2d272914b58967e6be4c46f5f25e54c468-merged.mount: Deactivated successfully.
Jan 31 03:33:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:45 np0005603663 podman[245905]: 2026-01-31 08:33:45.539920373 +0000 UTC m=+0.641560985 container remove 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:45 np0005603663 systemd[1]: libpod-conmon-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope: Deactivated successfully.
Jan 31 03:33:45 np0005603663 podman[245946]: 2026-01-31 08:33:45.740526523 +0000 UTC m=+0.111203764 container create 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:33:45 np0005603663 podman[245946]: 2026-01-31 08:33:45.648198663 +0000 UTC m=+0.018875944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:45 np0005603663 systemd[1]: Started libpod-conmon-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope.
Jan 31 03:33:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:46 np0005603663 podman[245946]: 2026-01-31 08:33:46.018409827 +0000 UTC m=+0.389087148 container init 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:46 np0005603663 podman[245946]: 2026-01-31 08:33:46.026851266 +0000 UTC m=+0.397528557 container start 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:33:46 np0005603663 podman[245946]: 2026-01-31 08:33:46.137912005 +0000 UTC m=+0.508589366 container attach 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:46 np0005603663 nice_babbage[245963]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:33:46 np0005603663 nice_babbage[245963]: --> All data devices are unavailable
Jan 31 03:33:46 np0005603663 systemd[1]: libpod-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope: Deactivated successfully.
Jan 31 03:33:46 np0005603663 podman[245946]: 2026-01-31 08:33:46.451885489 +0000 UTC m=+0.822562760 container died 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:47 np0005603663 systemd[1]: var-lib-containers-storage-overlay-cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22-merged.mount: Deactivated successfully.
Jan 31 03:33:47 np0005603663 podman[245946]: 2026-01-31 08:33:47.270339341 +0000 UTC m=+1.641016592 container remove 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:47 np0005603663 systemd[1]: libpod-conmon-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope: Deactivated successfully.
Jan 31 03:33:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.669610336 +0000 UTC m=+0.021507958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.802444612 +0000 UTC m=+0.154342154 container create 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:33:47 np0005603663 systemd[1]: Started libpod-conmon-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope.
Jan 31 03:33:47 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.914718465 +0000 UTC m=+0.266616037 container init 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.921318651 +0000 UTC m=+0.273216243 container start 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:33:47 np0005603663 heuristic_sutherland[246072]: 167 167
Jan 31 03:33:47 np0005603663 systemd[1]: libpod-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope: Deactivated successfully.
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.958141462 +0000 UTC m=+0.310039034 container attach 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:33:47 np0005603663 podman[246056]: 2026-01-31 08:33:47.95878762 +0000 UTC m=+0.310685192 container died 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 03:33:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1bd436a7a5c79bbdd1890b7a9c775299f1258399a44ed3a376bcb20c61963976-merged.mount: Deactivated successfully.
Jan 31 03:33:48 np0005603663 podman[246056]: 2026-01-31 08:33:48.298463511 +0000 UTC m=+0.650361053 container remove 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:48 np0005603663 systemd[1]: libpod-conmon-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope: Deactivated successfully.
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.428944559 +0000 UTC m=+0.041832863 container create aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.409790658 +0000 UTC m=+0.022678992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:48 np0005603663 systemd[1]: Started libpod-conmon-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope.
Jan 31 03:33:48 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:48 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.57085769 +0000 UTC m=+0.183746014 container init aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.578577259 +0000 UTC m=+0.191465563 container start aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.780973739 +0000 UTC m=+0.393862073 container attach aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]: {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    "0": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "devices": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "/dev/loop3"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            ],
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_name": "ceph_lv0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_size": "21470642176",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "name": "ceph_lv0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "tags": {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_name": "ceph",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.crush_device_class": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.encrypted": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.objectstore": "bluestore",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_id": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.vdo": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.with_tpm": "0"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            },
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "vg_name": "ceph_vg0"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        }
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    ],
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    "1": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "devices": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "/dev/loop4"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            ],
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_name": "ceph_lv1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_size": "21470642176",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "name": "ceph_lv1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "tags": {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_name": "ceph",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.crush_device_class": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.encrypted": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.objectstore": "bluestore",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_id": "1",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.vdo": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.with_tpm": "0"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            },
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "vg_name": "ceph_vg1"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        }
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    ],
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    "2": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "devices": [
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "/dev/loop5"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            ],
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_name": "ceph_lv2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_size": "21470642176",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "name": "ceph_lv2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "tags": {
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.cluster_name": "ceph",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.crush_device_class": "",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.encrypted": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.objectstore": "bluestore",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osd_id": "2",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.vdo": "0",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:                "ceph.with_tpm": "0"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            },
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "type": "block",
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:            "vg_name": "ceph_vg2"
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:        }
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]:    ]
Jan 31 03:33:48 np0005603663 sleepy_bell[246114]: }
Jan 31 03:33:48 np0005603663 systemd[1]: libpod-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope: Deactivated successfully.
Jan 31 03:33:48 np0005603663 conmon[246114]: conmon aead3223c04a6e162889 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope/container/memory.events
Jan 31 03:33:48 np0005603663 podman[246097]: 2026-01-31 08:33:48.872547478 +0000 UTC m=+0.485435782 container died aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:49 np0005603663 systemd[1]: var-lib-containers-storage-overlay-99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed-merged.mount: Deactivated successfully.
Jan 31 03:33:49 np0005603663 podman[246097]: 2026-01-31 08:33:49.111460121 +0000 UTC m=+0.724348425 container remove aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:33:49 np0005603663 systemd[1]: libpod-conmon-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope: Deactivated successfully.
Jan 31 03:33:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.587910287 +0000 UTC m=+0.037097709 container create 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:33:49 np0005603663 systemd[1]: Started libpod-conmon-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope.
Jan 31 03:33:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.571908745 +0000 UTC m=+0.021096197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.669168154 +0000 UTC m=+0.118355646 container init 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.674477494 +0000 UTC m=+0.123664916 container start 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:49 np0005603663 vigorous_cartwright[246214]: 167 167
Jan 31 03:33:49 np0005603663 systemd[1]: libpod-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope: Deactivated successfully.
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.682169112 +0000 UTC m=+0.131356614 container attach 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.682690866 +0000 UTC m=+0.131878328 container died 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:33:49 np0005603663 systemd[1]: var-lib-containers-storage-overlay-469d95fe89fab4fbb1f2831f1a9240a8acecdab0d02ff317416c673c203b0f06-merged.mount: Deactivated successfully.
Jan 31 03:33:49 np0005603663 podman[246198]: 2026-01-31 08:33:49.716342217 +0000 UTC m=+0.165529629 container remove 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:33:49 np0005603663 systemd[1]: libpod-conmon-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope: Deactivated successfully.
Jan 31 03:33:49 np0005603663 podman[246238]: 2026-01-31 08:33:49.84804066 +0000 UTC m=+0.037811070 container create 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:33:49 np0005603663 systemd[1]: Started libpod-conmon-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope.
Jan 31 03:33:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:33:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:49 np0005603663 podman[246238]: 2026-01-31 08:33:49.916437853 +0000 UTC m=+0.106208283 container init 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:33:49 np0005603663 podman[246238]: 2026-01-31 08:33:49.921933559 +0000 UTC m=+0.111703969 container start 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:49 np0005603663 podman[246238]: 2026-01-31 08:33:49.92766117 +0000 UTC m=+0.117431600 container attach 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:33:49 np0005603663 podman[246238]: 2026-01-31 08:33:49.832929143 +0000 UTC m=+0.022699583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:33:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:50 np0005603663 lvm[246330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:33:50 np0005603663 lvm[246330]: VG ceph_vg0 finished
Jan 31 03:33:50 np0005603663 lvm[246333]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:33:50 np0005603663 lvm[246333]: VG ceph_vg1 finished
Jan 31 03:33:50 np0005603663 lvm[246335]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:33:50 np0005603663 lvm[246335]: VG ceph_vg2 finished
Jan 31 03:33:50 np0005603663 affectionate_sammet[246254]: {}
Jan 31 03:33:50 np0005603663 systemd[1]: libpod-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope: Deactivated successfully.
Jan 31 03:33:50 np0005603663 podman[246238]: 2026-01-31 08:33:50.640163828 +0000 UTC m=+0.829934278 container died 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:33:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay-50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f-merged.mount: Deactivated successfully.
Jan 31 03:33:50 np0005603663 podman[246238]: 2026-01-31 08:33:50.696576993 +0000 UTC m=+0.886347413 container remove 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:50 np0005603663 systemd[1]: libpod-conmon-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope: Deactivated successfully.
Jan 31 03:33:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:33:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:33:50 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:33:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:33:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:07 np0005603663 nova_compute[238824]: 2026-01-31 08:34:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:07 np0005603663 nova_compute[238824]: 2026-01-31 08:34:07.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:34:07 np0005603663 nova_compute[238824]: 2026-01-31 08:34:07.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:34:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:08 np0005603663 nova_compute[238824]: 2026-01-31 08:34:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:09 np0005603663 podman[246377]: 2026-01-31 08:34:09.198975646 +0000 UTC m=+0.071295896 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:34:09 np0005603663 podman[246376]: 2026-01-31 08:34:09.224007764 +0000 UTC m=+0.097429855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:34:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:11 np0005603663 nova_compute[238824]: 2026-01-31 08:34:11.398 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:14 np0005603663 nova_compute[238824]: 2026-01-31 08:34:14.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:14 np0005603663 nova_compute[238824]: 2026-01-31 08:34:14.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:34:15 np0005603663 nova_compute[238824]: 2026-01-31 08:34:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:15 np0005603663 nova_compute[238824]: 2026-01-31 08:34:15.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.397 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:34:16 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081614150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:34:16 np0005603663 nova_compute[238824]: 2026-01-31 08:34:16.893 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.043 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.043 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.044 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.044 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.193 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.194 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.214 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219244939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.741 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.747 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.795 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.798 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:34:17 np0005603663 nova_compute[238824]: 2026-01-31 08:34:17.798 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.893 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:34:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:34:18 np0005603663 nova_compute[238824]: 2026-01-31 08:34:18.793 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:18 np0005603663 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:18 np0005603663 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:34:18 np0005603663 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:34:18 np0005603663 nova_compute[238824]: 2026-01-31 08:34:18.830 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:34:19 np0005603663 nova_compute[238824]: 2026-01-31 08:34:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:19 np0005603663 nova_compute[238824]: 2026-01-31 08:34:19.371 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:21 np0005603663 nova_compute[238824]: 2026-01-31 08:34:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:21 np0005603663 nova_compute[238824]: 2026-01-31 08:34:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:34:21 np0005603663 nova_compute[238824]: 2026-01-31 08:34:21.406 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:22 np0005603663 nova_compute[238824]: 2026-01-31 08:34:22.372 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:34:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.368990) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470369022, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1201, "num_deletes": 255, "total_data_size": 1862466, "memory_usage": 1888680, "flush_reason": "Manual Compaction"}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470377456, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1834736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18803, "largest_seqno": 20003, "table_properties": {"data_size": 1829030, "index_size": 3101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11484, "raw_average_key_size": 18, "raw_value_size": 1817574, "raw_average_value_size": 2965, "num_data_blocks": 142, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848346, "oldest_key_time": 1769848346, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8512 microseconds, and 3218 cpu microseconds.
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.377501) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1834736 bytes OK
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.377519) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379418) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379436) EVENT_LOG_v1 {"time_micros": 1769848470379431, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1856996, prev total WAL file size 1856996, number of live WAL files 2.
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1791KB)], [44(6335KB)]
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470379900, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8322259, "oldest_snapshot_seqno": -1}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4352 keys, 8196605 bytes, temperature: kUnknown
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470430443, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8196605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8165772, "index_size": 18883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107386, "raw_average_key_size": 24, "raw_value_size": 8085224, "raw_average_value_size": 1857, "num_data_blocks": 794, "num_entries": 4352, "num_filter_entries": 4352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.430797) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8196605 bytes
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.432148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.5) OK, records in: 4874, records dropped: 522 output_compression: NoCompression
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.432169) EVENT_LOG_v1 {"time_micros": 1769848470432157, "job": 22, "event": "compaction_finished", "compaction_time_micros": 50658, "compaction_time_cpu_micros": 12392, "output_level": 6, "num_output_files": 1, "total_output_size": 8196605, "num_input_records": 4874, "num_output_records": 4352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470432671, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470433633, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:34:31
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Jan 31 03:34:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:34:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:40 np0005603663 podman[246466]: 2026-01-31 08:34:40.147869156 +0000 UTC m=+0.041893235 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 03:34:40 np0005603663 podman[246465]: 2026-01-31 08:34:40.176937518 +0000 UTC m=+0.067925381 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:34:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:34:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:34:51 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.007620932 +0000 UTC m=+0.066652955 container create dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:51.966087648 +0000 UTC m=+0.025119721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:52 np0005603663 systemd[1]: Started libpod-conmon-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope.
Jan 31 03:34:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.137574015 +0000 UTC m=+0.196606128 container init dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.146758974 +0000 UTC m=+0.205791037 container start dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.150579953 +0000 UTC m=+0.209612066 container attach dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:34:52 np0005603663 systemd[1]: libpod-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope: Deactivated successfully.
Jan 31 03:34:52 np0005603663 nifty_diffie[246670]: 167 167
Jan 31 03:34:52 np0005603663 conmon[246670]: conmon dce65ba00438c0b2c3cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope/container/memory.events
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.155467211 +0000 UTC m=+0.214499234 container died dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:34:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7c6d3d713638cc28ccdb3fc1b796a5c222b14e1c88e4cb9dcd6054368c47fb35-merged.mount: Deactivated successfully.
Jan 31 03:34:52 np0005603663 podman[246653]: 2026-01-31 08:34:52.252907545 +0000 UTC m=+0.311939618 container remove dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:34:52 np0005603663 systemd[1]: libpod-conmon-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope: Deactivated successfully.
Jan 31 03:34:52 np0005603663 podman[246693]: 2026-01-31 08:34:52.397710758 +0000 UTC m=+0.024341669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:52 np0005603663 podman[246693]: 2026-01-31 08:34:52.603699 +0000 UTC m=+0.230329931 container create 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:34:52 np0005603663 systemd[1]: Started libpod-conmon-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope.
Jan 31 03:34:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:53 np0005603663 podman[246693]: 2026-01-31 08:34:53.214715591 +0000 UTC m=+0.841346562 container init 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:34:53 np0005603663 podman[246693]: 2026-01-31 08:34:53.221158883 +0000 UTC m=+0.847789834 container start 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:34:53 np0005603663 podman[246693]: 2026-01-31 08:34:53.378996034 +0000 UTC m=+1.005626985 container attach 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:34:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:53 np0005603663 blissful_mahavira[246709]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:34:53 np0005603663 blissful_mahavira[246709]: --> All data devices are unavailable
Jan 31 03:34:53 np0005603663 systemd[1]: libpod-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope: Deactivated successfully.
Jan 31 03:34:53 np0005603663 podman[246693]: 2026-01-31 08:34:53.701050007 +0000 UTC m=+1.327680978 container died 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:34:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1-merged.mount: Deactivated successfully.
Jan 31 03:34:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:55 np0005603663 podman[246693]: 2026-01-31 08:34:55.399654327 +0000 UTC m=+3.026285238 container remove 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:34:55 np0005603663 systemd[1]: libpod-conmon-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope: Deactivated successfully.
Jan 31 03:34:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:55 np0005603663 podman[246806]: 2026-01-31 08:34:55.839113898 +0000 UTC m=+0.034007652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:55 np0005603663 podman[246806]: 2026-01-31 08:34:55.978995253 +0000 UTC m=+0.173888967 container create 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 03:34:56 np0005603663 systemd[1]: Started libpod-conmon-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope.
Jan 31 03:34:56 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:56 np0005603663 podman[246806]: 2026-01-31 08:34:56.432187121 +0000 UTC m=+0.627080885 container init 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:34:56 np0005603663 podman[246806]: 2026-01-31 08:34:56.438387276 +0000 UTC m=+0.633280990 container start 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:34:56 np0005603663 suspicious_hermann[246823]: 167 167
Jan 31 03:34:56 np0005603663 systemd[1]: libpod-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope: Deactivated successfully.
Jan 31 03:34:56 np0005603663 podman[246806]: 2026-01-31 08:34:56.571032036 +0000 UTC m=+0.765925730 container attach 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:34:56 np0005603663 podman[246806]: 2026-01-31 08:34:56.572046594 +0000 UTC m=+0.766940308 container died 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:34:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e955b171f70d6b5db1cc7a0602e241f432c241ab4a262e99eff0ddc0d488f18c-merged.mount: Deactivated successfully.
Jan 31 03:34:57 np0005603663 podman[246806]: 2026-01-31 08:34:57.42030484 +0000 UTC m=+1.615198554 container remove 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:34:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:57 np0005603663 systemd[1]: libpod-conmon-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope: Deactivated successfully.
Jan 31 03:34:57 np0005603663 podman[246847]: 2026-01-31 08:34:57.595980626 +0000 UTC m=+0.036282607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:57 np0005603663 podman[246847]: 2026-01-31 08:34:57.705747138 +0000 UTC m=+0.146049099 container create 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:34:57 np0005603663 systemd[1]: Started libpod-conmon-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope.
Jan 31 03:34:57 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:57 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:57 np0005603663 podman[246847]: 2026-01-31 08:34:57.936442309 +0000 UTC m=+0.376744260 container init 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:34:57 np0005603663 podman[246847]: 2026-01-31 08:34:57.941986386 +0000 UTC m=+0.382288317 container start 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:34:57 np0005603663 podman[246847]: 2026-01-31 08:34:57.998659137 +0000 UTC m=+0.438961068 container attach 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:34:58 np0005603663 modest_noyce[246862]: {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    "0": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "devices": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "/dev/loop3"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            ],
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_name": "ceph_lv0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_size": "21470642176",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "name": "ceph_lv0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "tags": {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_name": "ceph",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.crush_device_class": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.encrypted": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.objectstore": "bluestore",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_id": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.vdo": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.with_tpm": "0"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            },
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "vg_name": "ceph_vg0"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        }
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    ],
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    "1": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "devices": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "/dev/loop4"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            ],
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_name": "ceph_lv1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_size": "21470642176",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "name": "ceph_lv1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "tags": {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_name": "ceph",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.crush_device_class": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.encrypted": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.objectstore": "bluestore",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_id": "1",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.vdo": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.with_tpm": "0"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            },
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "vg_name": "ceph_vg1"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        }
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    ],
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    "2": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "devices": [
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "/dev/loop5"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            ],
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_name": "ceph_lv2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_size": "21470642176",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "name": "ceph_lv2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "tags": {
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.cluster_name": "ceph",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.crush_device_class": "",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.encrypted": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.objectstore": "bluestore",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osd_id": "2",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.vdo": "0",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:                "ceph.with_tpm": "0"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            },
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "type": "block",
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:            "vg_name": "ceph_vg2"
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:        }
Jan 31 03:34:58 np0005603663 modest_noyce[246862]:    ]
Jan 31 03:34:58 np0005603663 modest_noyce[246862]: }
Jan 31 03:34:58 np0005603663 systemd[1]: libpod-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope: Deactivated successfully.
Jan 31 03:34:58 np0005603663 podman[246847]: 2026-01-31 08:34:58.222832363 +0000 UTC m=+0.663134294 container died 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:34:58 np0005603663 systemd[1]: var-lib-containers-storage-overlay-85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3-merged.mount: Deactivated successfully.
Jan 31 03:34:58 np0005603663 podman[246847]: 2026-01-31 08:34:58.767666242 +0000 UTC m=+1.207968173 container remove 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:58 np0005603663 systemd[1]: libpod-conmon-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope: Deactivated successfully.
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.219276957 +0000 UTC m=+0.039907499 container create 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:34:59 np0005603663 systemd[1]: Started libpod-conmon-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope.
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.196140843 +0000 UTC m=+0.016771405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:59 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.317121363 +0000 UTC m=+0.137751935 container init 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.325384076 +0000 UTC m=+0.146014618 container start 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:34:59 np0005603663 romantic_bassi[246963]: 167 167
Jan 31 03:34:59 np0005603663 systemd[1]: libpod-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope: Deactivated successfully.
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.33435554 +0000 UTC m=+0.154986102 container attach 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.334821673 +0000 UTC m=+0.155452215 container died 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:34:59 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a848ffd85386a878a61d21ef6e92e5361ad5a32ab9f4f17964d631baf2210a2f-merged.mount: Deactivated successfully.
Jan 31 03:34:59 np0005603663 podman[246947]: 2026-01-31 08:34:59.398514023 +0000 UTC m=+0.219144565 container remove 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:34:59 np0005603663 systemd[1]: libpod-conmon-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope: Deactivated successfully.
Jan 31 03:34:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:34:59 np0005603663 podman[246987]: 2026-01-31 08:34:59.594509143 +0000 UTC m=+0.093530405 container create ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:34:59 np0005603663 podman[246987]: 2026-01-31 08:34:59.528921119 +0000 UTC m=+0.027942401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:34:59 np0005603663 systemd[1]: Started libpod-conmon-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope.
Jan 31 03:34:59 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:34:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:59 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:59 np0005603663 podman[246987]: 2026-01-31 08:34:59.744678918 +0000 UTC m=+0.243700200 container init ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:34:59 np0005603663 podman[246987]: 2026-01-31 08:34:59.750043549 +0000 UTC m=+0.249064811 container start ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:34:59 np0005603663 podman[246987]: 2026-01-31 08:34:59.766500304 +0000 UTC m=+0.265521586 container attach ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:00 np0005603663 lvm[247085]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:35:00 np0005603663 lvm[247084]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:35:00 np0005603663 lvm[247084]: VG ceph_vg0 finished
Jan 31 03:35:00 np0005603663 lvm[247085]: VG ceph_vg1 finished
Jan 31 03:35:00 np0005603663 lvm[247087]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:35:00 np0005603663 lvm[247087]: VG ceph_vg2 finished
Jan 31 03:35:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:00 np0005603663 tender_hellman[247006]: {}
Jan 31 03:35:00 np0005603663 systemd[1]: libpod-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Deactivated successfully.
Jan 31 03:35:00 np0005603663 systemd[1]: libpod-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Consumed 1.007s CPU time.
Jan 31 03:35:00 np0005603663 podman[246987]: 2026-01-31 08:35:00.542101187 +0000 UTC m=+1.041122499 container died ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:35:00 np0005603663 systemd[1]: var-lib-containers-storage-overlay-844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab-merged.mount: Deactivated successfully.
Jan 31 03:35:00 np0005603663 podman[246987]: 2026-01-31 08:35:00.647309841 +0000 UTC m=+1.146331113 container remove ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:35:00 np0005603663 systemd[1]: libpod-conmon-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Deactivated successfully.
Jan 31 03:35:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:35:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:35:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:35:00 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:35:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:35:01 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:11 np0005603663 podman[247130]: 2026-01-31 08:35:11.175525347 +0000 UTC m=+0.059382889 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 03:35:11 np0005603663 podman[247129]: 2026-01-31 08:35:11.203768516 +0000 UTC m=+0.092214798 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 03:35:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:12 np0005603663 nova_compute[238824]: 2026-01-31 08:35:12.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:15 np0005603663 nova_compute[238824]: 2026-01-31 08:35:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:15 np0005603663 nova_compute[238824]: 2026-01-31 08:35:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:15 np0005603663 nova_compute[238824]: 2026-01-31 08:35:15.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:35:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:16 np0005603663 nova_compute[238824]: 2026-01-31 08:35:16.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.338 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.363 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.364 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.409 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.409 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875130539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:35:17 np0005603663 nova_compute[238824]: 2026-01-31 08:35:17.950 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:35:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.072 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.224 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.224 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.242 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.351 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.352 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.368 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.390 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.411 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:35:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320495469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.972 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:18 np0005603663 nova_compute[238824]: 2026-01-31 08:35:18.976 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:35:19 np0005603663 nova_compute[238824]: 2026-01-31 08:35:19.007 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:35:19 np0005603663 nova_compute[238824]: 2026-01-31 08:35:19.010 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:35:19 np0005603663 nova_compute[238824]: 2026-01-31 08:35:19.010 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:20 np0005603663 nova_compute[238824]: 2026-01-31 08:35:20.987 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:23 np0005603663 nova_compute[238824]: 2026-01-31 08:35:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:35:31
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'images']
Jan 31 03:35:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:35:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:42 np0005603663 podman[247218]: 2026-01-31 08:35:42.157088019 +0000 UTC m=+0.051700332 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:35:42 np0005603663 podman[247217]: 2026-01-31 08:35:42.198951423 +0000 UTC m=+0.095497161 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:35:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:35:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:36:01 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.050433927 +0000 UTC m=+0.031471631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.187558293 +0000 UTC m=+0.168596007 container create fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:02 np0005603663 systemd[1]: Started libpod-conmon-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope.
Jan 31 03:36:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.347764951 +0000 UTC m=+0.328802665 container init fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.390785047 +0000 UTC m=+0.371822751 container start fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:36:02 np0005603663 systemd[1]: libpod-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope: Deactivated successfully.
Jan 31 03:36:02 np0005603663 nifty_vaughan[247422]: 167 167
Jan 31 03:36:02 np0005603663 conmon[247422]: conmon fc98def84f5d0ab8e4f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope/container/memory.events
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.443229759 +0000 UTC m=+0.424267443 container attach fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.444625328 +0000 UTC m=+0.425663012 container died fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9bf791c8d83d32beb8a204504814cd42ad7dcea9ce34ab0ce2b504bdd5e56a1d-merged.mount: Deactivated successfully.
Jan 31 03:36:02 np0005603663 podman[247405]: 2026-01-31 08:36:02.502756731 +0000 UTC m=+0.483794425 container remove fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:02 np0005603663 systemd[1]: libpod-conmon-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope: Deactivated successfully.
Jan 31 03:36:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:36:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:02 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:36:02 np0005603663 podman[247448]: 2026-01-31 08:36:02.656852576 +0000 UTC m=+0.042133682 container create b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:02 np0005603663 systemd[1]: Started libpod-conmon-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope.
Jan 31 03:36:02 np0005603663 podman[247448]: 2026-01-31 08:36:02.638684282 +0000 UTC m=+0.023965418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:02 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:02 np0005603663 podman[247448]: 2026-01-31 08:36:02.760187727 +0000 UTC m=+0.145468863 container init b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:02 np0005603663 podman[247448]: 2026-01-31 08:36:02.768709387 +0000 UTC m=+0.153990493 container start b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:02 np0005603663 podman[247448]: 2026-01-31 08:36:02.773428291 +0000 UTC m=+0.158709557 container attach b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:03 np0005603663 admiring_rubin[247464]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:36:03 np0005603663 admiring_rubin[247464]: --> All data devices are unavailable
Jan 31 03:36:03 np0005603663 systemd[1]: libpod-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope: Deactivated successfully.
Jan 31 03:36:03 np0005603663 podman[247448]: 2026-01-31 08:36:03.211926875 +0000 UTC m=+0.597207981 container died b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:03 np0005603663 systemd[1]: var-lib-containers-storage-overlay-69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3-merged.mount: Deactivated successfully.
Jan 31 03:36:03 np0005603663 podman[247448]: 2026-01-31 08:36:03.2549082 +0000 UTC m=+0.640189306 container remove b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:03 np0005603663 systemd[1]: libpod-conmon-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope: Deactivated successfully.
Jan 31 03:36:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:03 np0005603663 podman[247558]: 2026-01-31 08:36:03.949113191 +0000 UTC m=+0.040391942 container create 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:03 np0005603663 systemd[1]: Started libpod-conmon-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope.
Jan 31 03:36:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:04.024157022 +0000 UTC m=+0.115435783 container init 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:03.930517646 +0000 UTC m=+0.021796437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:04.029585426 +0000 UTC m=+0.120864177 container start 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:04.032473098 +0000 UTC m=+0.123751849 container attach 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:36:04 np0005603663 hungry_tesla[247574]: 167 167
Jan 31 03:36:04 np0005603663 systemd[1]: libpod-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope: Deactivated successfully.
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:04.034371151 +0000 UTC m=+0.125649912 container died 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:04 np0005603663 systemd[1]: var-lib-containers-storage-overlay-1c916189ffc2da3372bc7d2944e1f801516e5d9503fa3456fd83df4402e3ee97-merged.mount: Deactivated successfully.
Jan 31 03:36:04 np0005603663 podman[247558]: 2026-01-31 08:36:04.072445017 +0000 UTC m=+0.163723768 container remove 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:36:04 np0005603663 systemd[1]: libpod-conmon-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope: Deactivated successfully.
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.210740556 +0000 UTC m=+0.046348941 container create e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 03:36:04 np0005603663 systemd[1]: Started libpod-conmon-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope.
Jan 31 03:36:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.190963337 +0000 UTC m=+0.026571792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.317468523 +0000 UTC m=+0.153076938 container init e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.325268083 +0000 UTC m=+0.160876458 container start e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.329430391 +0000 UTC m=+0.165038806 container attach e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:36:04 np0005603663 eager_jemison[247614]: {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    "0": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "devices": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "/dev/loop3"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            ],
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_name": "ceph_lv0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_size": "21470642176",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "name": "ceph_lv0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "tags": {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_name": "ceph",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.crush_device_class": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.encrypted": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.objectstore": "bluestore",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_id": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.vdo": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.with_tpm": "0"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            },
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "vg_name": "ceph_vg0"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        }
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    ],
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    "1": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "devices": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "/dev/loop4"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            ],
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_name": "ceph_lv1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_size": "21470642176",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "name": "ceph_lv1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "tags": {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_name": "ceph",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.crush_device_class": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.encrypted": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.objectstore": "bluestore",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_id": "1",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.vdo": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.with_tpm": "0"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            },
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "vg_name": "ceph_vg1"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        }
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    ],
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    "2": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "devices": [
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "/dev/loop5"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            ],
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_name": "ceph_lv2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_size": "21470642176",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "name": "ceph_lv2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "tags": {
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.cluster_name": "ceph",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.crush_device_class": "",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.encrypted": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.objectstore": "bluestore",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osd_id": "2",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.vdo": "0",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:                "ceph.with_tpm": "0"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            },
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "type": "block",
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:            "vg_name": "ceph_vg2"
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:        }
Jan 31 03:36:04 np0005603663 eager_jemison[247614]:    ]
Jan 31 03:36:04 np0005603663 eager_jemison[247614]: }
Jan 31 03:36:04 np0005603663 systemd[1]: libpod-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope: Deactivated successfully.
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.602731976 +0000 UTC m=+0.438340361 container died e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:36:04 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28-merged.mount: Deactivated successfully.
Jan 31 03:36:04 np0005603663 podman[247598]: 2026-01-31 08:36:04.727319707 +0000 UTC m=+0.562928132 container remove e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:36:04 np0005603663 systemd[1]: libpod-conmon-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope: Deactivated successfully.
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.19372881 +0000 UTC m=+0.047007229 container create c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:36:05 np0005603663 systemd[1]: Started libpod-conmon-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope.
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.172906802 +0000 UTC m=+0.026185311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:05 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.286527223 +0000 UTC m=+0.139805722 container init c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.295895198 +0000 UTC m=+0.149173657 container start c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:05 np0005603663 gracious_antonelli[247717]: 167 167
Jan 31 03:36:05 np0005603663 systemd[1]: libpod-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope: Deactivated successfully.
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.301121856 +0000 UTC m=+0.154400275 container attach c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.302245887 +0000 UTC m=+0.155524346 container died c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:36:05 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3f5eda88d5cb2383bf848f048b9f15d1902421db7a442c4e16f5b5617f290b7f-merged.mount: Deactivated successfully.
Jan 31 03:36:05 np0005603663 podman[247701]: 2026-01-31 08:36:05.344460331 +0000 UTC m=+0.197738750 container remove c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:05 np0005603663 systemd[1]: libpod-conmon-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope: Deactivated successfully.
Jan 31 03:36:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:05 np0005603663 podman[247740]: 2026-01-31 08:36:05.50932354 +0000 UTC m=+0.051011892 container create 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:05 np0005603663 systemd[1]: Started libpod-conmon-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope.
Jan 31 03:36:05 np0005603663 podman[247740]: 2026-01-31 08:36:05.479413365 +0000 UTC m=+0.021101727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:36:05 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:36:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:05 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:05 np0005603663 podman[247740]: 2026-01-31 08:36:05.602293218 +0000 UTC m=+0.143981600 container init 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:36:05 np0005603663 podman[247740]: 2026-01-31 08:36:05.609019248 +0000 UTC m=+0.150707590 container start 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:36:05 np0005603663 podman[247740]: 2026-01-31 08:36:05.612953229 +0000 UTC m=+0.154641611 container attach 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:36:06 np0005603663 lvm[247834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:36:06 np0005603663 lvm[247834]: VG ceph_vg0 finished
Jan 31 03:36:06 np0005603663 lvm[247836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:36:06 np0005603663 lvm[247836]: VG ceph_vg1 finished
Jan 31 03:36:06 np0005603663 lvm[247838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:36:06 np0005603663 lvm[247838]: VG ceph_vg2 finished
Jan 31 03:36:06 np0005603663 nifty_pasteur[247756]: {}
Jan 31 03:36:06 np0005603663 systemd[1]: libpod-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Deactivated successfully.
Jan 31 03:36:06 np0005603663 systemd[1]: libpod-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Consumed 1.230s CPU time.
Jan 31 03:36:06 np0005603663 podman[247740]: 2026-01-31 08:36:06.445030387 +0000 UTC m=+0.986718729 container died 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:36:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773-merged.mount: Deactivated successfully.
Jan 31 03:36:06 np0005603663 podman[247740]: 2026-01-31 08:36:06.495225646 +0000 UTC m=+1.036913978 container remove 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:06 np0005603663 systemd[1]: libpod-conmon-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Deactivated successfully.
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:36:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:13 np0005603663 podman[247880]: 2026-01-31 08:36:13.180232396 +0000 UTC m=+0.062613401 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:13 np0005603663 podman[247879]: 2026-01-31 08:36:13.217305354 +0000 UTC m=+0.099968176 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:36:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:36:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4627 writes, 20K keys, 4627 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4627 writes, 4627 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1319 writes, 6013 keys, 1319 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
Interval WAL: 1319 writes, 1319 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.3      1.50              0.05        11    0.136       0      0       0.0       0.0
  L6      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     34.6     28.7      2.56              0.20        10    0.256     43K   5191       0.0       0.0
 Sum      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     21.8     23.7      4.05              0.25        21    0.193     43K   5191       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.7     14.1     14.3      3.16              0.11        10    0.316     24K   2989       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     34.6     28.7      2.56              0.20        10    0.256     43K   5191       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.3      1.49              0.05        10    0.149       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.022, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.09 GB read, 0.05 MB/s read, 4.1 seconds
Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 3.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 304.00 MB usage: 7.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000113 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(430,6.79 MB,2.23426%) FilterBlock(22,128.86 KB,0.0413945%) IndexBlock(22,242.23 KB,0.0778148%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 03:36:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:14 np0005603663 nova_compute[238824]: 2026-01-31 08:36:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:15 np0005603663 nova_compute[238824]: 2026-01-31 08:36:15.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:16 np0005603663 nova_compute[238824]: 2026-01-31 08:36:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.365 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.896 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.896 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814207391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:36:17 np0005603663 nova_compute[238824]: 2026-01-31 08:36:17.951 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:36:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.095 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.097 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.097 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.098 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.167 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.168 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.186 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.681379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578681449, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1095, "num_deletes": 251, "total_data_size": 1642941, "memory_usage": 1673920, "flush_reason": "Manual Compaction"}
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578810861, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1616668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20004, "largest_seqno": 21098, "table_properties": {"data_size": 1611362, "index_size": 2766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11269, "raw_average_key_size": 19, "raw_value_size": 1600751, "raw_average_value_size": 2788, "num_data_blocks": 127, "num_entries": 574, "num_filter_entries": 574, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848470, "oldest_key_time": 1769848470, "file_creation_time": 1769848578, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 129534 microseconds, and 4724 cpu microseconds.
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.810920) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1616668 bytes OK
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.810942) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837228) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837290) EVENT_LOG_v1 {"time_micros": 1769848578837282, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1637857, prev total WAL file size 1637857, number of live WAL files 2.
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.838013) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1578KB)], [47(8004KB)]
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578838045, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9813273, "oldest_snapshot_seqno": -1}
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705001609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.880 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.887 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.907 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.910 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:36:18 np0005603663 nova_compute[238824]: 2026-01-31 08:36:18.910 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:18 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4412 keys, 8023610 bytes, temperature: kUnknown
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578999571, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8023610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7992517, "index_size": 18951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109228, "raw_average_key_size": 24, "raw_value_size": 7911063, "raw_average_value_size": 1793, "num_data_blocks": 793, "num_entries": 4412, "num_filter_entries": 4412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848578, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.999880) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8023610 bytes
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.004950) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.7 rd, 49.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.8 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 4926, records dropped: 514 output_compression: NoCompression
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.004980) EVENT_LOG_v1 {"time_micros": 1769848579004966, "job": 24, "event": "compaction_finished", "compaction_time_micros": 161644, "compaction_time_cpu_micros": 13274, "output_level": 6, "num_output_files": 1, "total_output_size": 8023610, "num_input_records": 4926, "num_output_records": 4412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848579005403, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848579006685, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:36:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.911 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.939 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:36:19 np0005603663 nova_compute[238824]: 2026-01-31 08:36:19.940 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:20 np0005603663 nova_compute[238824]: 2026-01-31 08:36:20.361 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:23 np0005603663 nova_compute[238824]: 2026-01-31 08:36:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:36:31
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 31 03:36:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:36:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:36:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:44 np0005603663 podman[247967]: 2026-01-31 08:36:44.172953325 +0000 UTC m=+0.058554326 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:36:44 np0005603663 podman[247966]: 2026-01-31 08:36:44.189854392 +0000 UTC m=+0.077549903 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:36:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:36:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s wr, 0 op/s
Jan 31 03:37:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 03:37:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 03:37:05 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 03:37:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 819 KiB/s wr, 9 op/s
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 03:37:07 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 03:37:07 np0005603663 podman[248221]: 2026-01-31 08:37:07.887677752 +0000 UTC m=+0.044810477 container create 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:37:07 np0005603663 systemd[1]: Started libpod-conmon-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope.
Jan 31 03:37:07 np0005603663 podman[248221]: 2026-01-31 08:37:07.864073245 +0000 UTC m=+0.021205990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:07 np0005603663 podman[248221]: 2026-01-31 08:37:07.990790027 +0000 UTC m=+0.147922762 container init 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:37:08 np0005603663 podman[248221]: 2026-01-31 08:37:08.001776947 +0000 UTC m=+0.158909672 container start 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:08 np0005603663 systemd[1]: libpod-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope: Deactivated successfully.
Jan 31 03:37:08 np0005603663 vigilant_tharp[248238]: 167 167
Jan 31 03:37:08 np0005603663 conmon[248238]: conmon 20ae7ff6567c8b7fb63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope/container/memory.events
Jan 31 03:37:08 np0005603663 podman[248221]: 2026-01-31 08:37:08.010283428 +0000 UTC m=+0.167416253 container attach 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:37:08 np0005603663 podman[248221]: 2026-01-31 08:37:08.011242445 +0000 UTC m=+0.168375170 container died 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:37:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-99f227ccd708901342097eea12842b09094128c09585be5ebe0111cebf94ebf4-merged.mount: Deactivated successfully.
Jan 31 03:37:08 np0005603663 podman[248221]: 2026-01-31 08:37:08.089438655 +0000 UTC m=+0.246571390 container remove 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:37:08 np0005603663 systemd[1]: libpod-conmon-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope: Deactivated successfully.
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.226909031 +0000 UTC m=+0.044992093 container create a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 03:37:08 np0005603663 systemd[1]: Started libpod-conmon-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope.
Jan 31 03:37:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:08 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.207492462 +0000 UTC m=+0.025575544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.314525027 +0000 UTC m=+0.132608179 container init a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.321532665 +0000 UTC m=+0.139615747 container start a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.326083964 +0000 UTC m=+0.144167056 container attach a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:37:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:37:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:08 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:37:08 np0005603663 distracted_poincare[248277]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:37:08 np0005603663 distracted_poincare[248277]: --> All data devices are unavailable
Jan 31 03:37:08 np0005603663 systemd[1]: libpod-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope: Deactivated successfully.
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.793794453 +0000 UTC m=+0.611877535 container died a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:37:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2-merged.mount: Deactivated successfully.
Jan 31 03:37:08 np0005603663 podman[248261]: 2026-01-31 08:37:08.834743351 +0000 UTC m=+0.652826423 container remove a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:08 np0005603663 systemd[1]: libpod-conmon-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope: Deactivated successfully.
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.269201111 +0000 UTC m=+0.037484121 container create 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:37:09 np0005603663 systemd[1]: Started libpod-conmon-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope.
Jan 31 03:37:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.340707122 +0000 UTC m=+0.108990182 container init 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.252720605 +0000 UTC m=+0.021003645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.349330476 +0000 UTC m=+0.117613486 container start 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.352281379 +0000 UTC m=+0.120564439 container attach 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:37:09 np0005603663 elegant_robinson[248386]: 167 167
Jan 31 03:37:09 np0005603663 systemd[1]: libpod-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope: Deactivated successfully.
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.35337226 +0000 UTC m=+0.121655290 container died 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fe27db8f9e9c3597a43129a627d2e03bc8f007ecb8c62dc3028d35cdf3511e27-merged.mount: Deactivated successfully.
Jan 31 03:37:09 np0005603663 podman[248369]: 2026-01-31 08:37:09.390005075 +0000 UTC m=+0.158288085 container remove 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:37:09 np0005603663 systemd[1]: libpod-conmon-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope: Deactivated successfully.
Jan 31 03:37:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1024 KiB/s wr, 11 op/s
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.538796001 +0000 UTC m=+0.039900789 container create 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:09 np0005603663 systemd[1]: Started libpod-conmon-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope.
Jan 31 03:37:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.519314881 +0000 UTC m=+0.020419679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.623813984 +0000 UTC m=+0.124918772 container init 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.62860905 +0000 UTC m=+0.129713838 container start 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.631860832 +0000 UTC m=+0.132965630 container attach 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]: {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    "0": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "devices": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "/dev/loop3"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            ],
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_name": "ceph_lv0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_size": "21470642176",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "name": "ceph_lv0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "tags": {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_name": "ceph",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.crush_device_class": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.encrypted": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.objectstore": "bluestore",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_id": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.vdo": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.with_tpm": "0"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            },
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "vg_name": "ceph_vg0"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        }
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    ],
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    "1": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "devices": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "/dev/loop4"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            ],
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_name": "ceph_lv1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_size": "21470642176",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "name": "ceph_lv1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "tags": {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_name": "ceph",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.crush_device_class": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.encrypted": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.objectstore": "bluestore",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_id": "1",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.vdo": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.with_tpm": "0"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            },
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "vg_name": "ceph_vg1"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        }
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    ],
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    "2": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "devices": [
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "/dev/loop5"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            ],
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_name": "ceph_lv2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_size": "21470642176",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "name": "ceph_lv2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "tags": {
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.cluster_name": "ceph",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.crush_device_class": "",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.encrypted": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.objectstore": "bluestore",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osd_id": "2",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.vdo": "0",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:                "ceph.with_tpm": "0"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            },
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "type": "block",
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:            "vg_name": "ceph_vg2"
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:        }
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]:    ]
Jan 31 03:37:09 np0005603663 naughty_jepsen[248426]: }
Jan 31 03:37:09 np0005603663 systemd[1]: libpod-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope: Deactivated successfully.
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.940900137 +0000 UTC m=+0.442004955 container died 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:37:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5-merged.mount: Deactivated successfully.
Jan 31 03:37:09 np0005603663 podman[248410]: 2026-01-31 08:37:09.993071452 +0000 UTC m=+0.494176250 container remove 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:37:10 np0005603663 systemd[1]: libpod-conmon-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope: Deactivated successfully.
Jan 31 03:37:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.438154212 +0000 UTC m=+0.039490637 container create 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:37:10 np0005603663 systemd[1]: Started libpod-conmon-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope.
Jan 31 03:37:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.507710758 +0000 UTC m=+0.109047183 container init 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.515749565 +0000 UTC m=+0.117085990 container start 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.421743788 +0000 UTC m=+0.023080233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.519195832 +0000 UTC m=+0.120532277 container attach 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:10 np0005603663 sleepy_kalam[248527]: 167 167
Jan 31 03:37:10 np0005603663 systemd[1]: libpod-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope: Deactivated successfully.
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.520817918 +0000 UTC m=+0.122154343 container died 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 03:37:10 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9e5088b90d55fcd98fa23e81e5c443360e7960bb6a3d0bd60f68dbc3fe8b0536-merged.mount: Deactivated successfully.
Jan 31 03:37:10 np0005603663 podman[248510]: 2026-01-31 08:37:10.561389075 +0000 UTC m=+0.162725520 container remove 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:10 np0005603663 systemd[1]: libpod-conmon-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope: Deactivated successfully.
Jan 31 03:37:10 np0005603663 podman[248550]: 2026-01-31 08:37:10.715359686 +0000 UTC m=+0.049071548 container create 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:37:10 np0005603663 systemd[1]: Started libpod-conmon-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope.
Jan 31 03:37:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:37:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:10 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:10 np0005603663 podman[248550]: 2026-01-31 08:37:10.689706981 +0000 UTC m=+0.023418933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:37:10 np0005603663 podman[248550]: 2026-01-31 08:37:10.787601528 +0000 UTC m=+0.121313430 container init 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:10 np0005603663 podman[248550]: 2026-01-31 08:37:10.793653919 +0000 UTC m=+0.127365791 container start 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:10 np0005603663 podman[248550]: 2026-01-31 08:37:10.800741349 +0000 UTC m=+0.134453221 container attach 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:11 np0005603663 lvm[248646]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:37:11 np0005603663 lvm[248646]: VG ceph_vg1 finished
Jan 31 03:37:11 np0005603663 lvm[248645]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:37:11 np0005603663 lvm[248645]: VG ceph_vg0 finished
Jan 31 03:37:11 np0005603663 lvm[248648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:37:11 np0005603663 lvm[248648]: VG ceph_vg2 finished
Jan 31 03:37:11 np0005603663 fervent_ganguly[248567]: {}
Jan 31 03:37:11 np0005603663 systemd[1]: libpod-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope: Deactivated successfully.
Jan 31 03:37:11 np0005603663 podman[248550]: 2026-01-31 08:37:11.484710561 +0000 UTC m=+0.818422463 container died 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 03:37:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07-merged.mount: Deactivated successfully.
Jan 31 03:37:11 np0005603663 podman[248550]: 2026-01-31 08:37:11.545176871 +0000 UTC m=+0.878888773 container remove 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:37:11 np0005603663 systemd[1]: libpod-conmon-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope: Deactivated successfully.
Jan 31 03:37:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:37:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:37:11 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:37:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 31 03:37:14 np0005603663 nova_compute[238824]: 2026-01-31 08:37:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:15 np0005603663 podman[248690]: 2026-01-31 08:37:15.208636516 +0000 UTC m=+0.087825063 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:37:15 np0005603663 podman[248689]: 2026-01-31 08:37:15.225203595 +0000 UTC m=+0.105563855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:37:15 np0005603663 nova_compute[238824]: 2026-01-31 08:37:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Jan 31 03:37:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:37:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6186 writes, 25K keys, 6186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6186 writes, 1125 syncs, 5.50 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 363 writes, 834 keys, 363 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s#012Interval WAL: 363 writes, 164 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:37:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Jan 31 03:37:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.898 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:37:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:37:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:37:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:37:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534117675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:37:18 np0005603663 nova_compute[238824]: 2026-01-31 08:37:18.903 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.099 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.101 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.101 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.102 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.181 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.181 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.204 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 24 op/s
Jan 31 03:37:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 03:37:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 03:37:19 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 03:37:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:37:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37918866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.784 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.788 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.803 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.806 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:37:19 np0005603663 nova_compute[238824]: 2026-01-31 08:37:19.806 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:20 np0005603663 nova_compute[238824]: 2026-01-31 08:37:20.808 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:20 np0005603663 nova_compute[238824]: 2026-01-31 08:37:20.808 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:37:20 np0005603663 nova_compute[238824]: 2026-01-31 08:37:20.809 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:37:20 np0005603663 nova_compute[238824]: 2026-01-31 08:37:20.824 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:37:20 np0005603663 nova_compute[238824]: 2026-01-31 08:37:20.824 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:37:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 7626 writes, 30K keys, 7626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7626 writes, 1597 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 570 writes, 1601 keys, 570 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 570 writes, 250 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:37:21 np0005603663 nova_compute[238824]: 2026-01-31 08:37:21.350 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 03:37:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 03:37:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 03:37:22 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 03:37:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Jan 31 03:37:24 np0005603663 nova_compute[238824]: 2026-01-31 08:37:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:37:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 8.5 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Jan 31 03:37:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 31 03:37:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:37:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.8 total, 600.0 interval
Cumulative writes: 6134 writes, 25K keys, 6134 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6134 writes, 1062 syncs, 5.78 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 543 writes, 1575 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
Interval WAL: 543 writes, 236 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:37:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 31 03:37:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 03:37:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 03:37:30 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:37:31
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images']
Jan 31 03:37:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:37:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 30 op/s
Jan 31 03:37:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Jan 31 03:37:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:37:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:46 np0005603663 podman[248781]: 2026-01-31 08:37:46.159848781 +0000 UTC m=+0.053173284 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:37:46 np0005603663 podman[248780]: 2026-01-31 08:37:46.180804344 +0000 UTC m=+0.075771643 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:37:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:37:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:12 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.083775602 +0000 UTC m=+0.060134277 container create b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:13 np0005603663 systemd[1]: Started libpod-conmon-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope.
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.058792813 +0000 UTC m=+0.035151528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.182681297 +0000 UTC m=+0.159040012 container init b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.192460414 +0000 UTC m=+0.168819089 container start b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.197333902 +0000 UTC m=+0.173692577 container attach b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:13 np0005603663 festive_torvalds[249053]: 167 167
Jan 31 03:38:13 np0005603663 systemd[1]: libpod-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope: Deactivated successfully.
Jan 31 03:38:13 np0005603663 conmon[249053]: conmon b76803e5b9fcc5d10139 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope/container/memory.events
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.201379897 +0000 UTC m=+0.177738572 container died b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:38:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fb1ed9233ac16777b4e4d31488084d33c6143aebe9c179c35d88550a643b5de2-merged.mount: Deactivated successfully.
Jan 31 03:38:13 np0005603663 podman[249037]: 2026-01-31 08:38:13.279717919 +0000 UTC m=+0.256076594 container remove b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:13 np0005603663 systemd[1]: libpod-conmon-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope: Deactivated successfully.
Jan 31 03:38:13 np0005603663 podman[249077]: 2026-01-31 08:38:13.455643339 +0000 UTC m=+0.057861942 container create e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:38:13 np0005603663 systemd[1]: Started libpod-conmon-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope.
Jan 31 03:38:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:13 np0005603663 podman[249077]: 2026-01-31 08:38:13.432720998 +0000 UTC m=+0.034939671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:13 np0005603663 podman[249077]: 2026-01-31 08:38:13.542864222 +0000 UTC m=+0.145082835 container init e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:38:13 np0005603663 podman[249077]: 2026-01-31 08:38:13.548614925 +0000 UTC m=+0.150833538 container start e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 03:38:13 np0005603663 podman[249077]: 2026-01-31 08:38:13.552078684 +0000 UTC m=+0.154297277 container attach e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]: [
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:    {
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "available": false,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "being_replaced": false,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "ceph_device_lvm": false,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "lsm_data": {},
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "lvs": [],
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "path": "/dev/sr0",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "rejected_reasons": [
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "Has a FileSystem",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "Insufficient space (<5GB)"
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        ],
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        "sys_api": {
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "actuators": null,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "device_nodes": [
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:                "sr0"
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            ],
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "devname": "sr0",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "human_readable_size": "482.00 KB",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "id_bus": "ata",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "model": "QEMU DVD-ROM",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "nr_requests": "2",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "parent": "/dev/sr0",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "partitions": {},
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "path": "/dev/sr0",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "removable": "1",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "rev": "2.5+",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "ro": "0",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "rotational": "1",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "sas_address": "",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "sas_device_handle": "",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "scheduler_mode": "mq-deadline",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "sectors": 0,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "sectorsize": "2048",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "size": 493568.0,
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "support_discard": "2048",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "type": "disk",
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:            "vendor": "QEMU"
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:        }
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]:    }
Jan 31 03:38:14 np0005603663 reverent_einstein[249094]: ]
Jan 31 03:38:14 np0005603663 systemd[1]: libpod-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope: Deactivated successfully.
Jan 31 03:38:14 np0005603663 podman[249077]: 2026-01-31 08:38:14.098840491 +0000 UTC m=+0.701059094 container died e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:38:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f-merged.mount: Deactivated successfully.
Jan 31 03:38:14 np0005603663 podman[249077]: 2026-01-31 08:38:14.573303567 +0000 UTC m=+1.175522160 container remove e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:38:14 np0005603663 systemd[1]: libpod-conmon-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope: Deactivated successfully.
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:14 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.015812408 +0000 UTC m=+0.038994087 container create 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:38:15 np0005603663 systemd[1]: Started libpod-conmon-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope.
Jan 31 03:38:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.091215515 +0000 UTC m=+0.114397244 container init 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.000222075 +0000 UTC m=+0.023403714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.097905285 +0000 UTC m=+0.121086944 container start 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 03:38:15 np0005603663 gallant_lehmann[249901]: 167 167
Jan 31 03:38:15 np0005603663 systemd[1]: libpod-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope: Deactivated successfully.
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.101794965 +0000 UTC m=+0.124976654 container attach 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.102998139 +0000 UTC m=+0.126179788 container died 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0a5be5edbf5e0604e085ddc1816f2b32256699a1a8058d57434bc8b2ea0d2014-merged.mount: Deactivated successfully.
Jan 31 03:38:15 np0005603663 podman[249885]: 2026-01-31 08:38:15.136809158 +0000 UTC m=+0.159990797 container remove 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:38:15 np0005603663 systemd[1]: libpod-conmon-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope: Deactivated successfully.
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.298029371 +0000 UTC m=+0.047097367 container create edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:38:15 np0005603663 systemd[1]: Started libpod-conmon-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope.
Jan 31 03:38:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.280157454 +0000 UTC m=+0.029225470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.382922739 +0000 UTC m=+0.131990785 container init edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.38861985 +0000 UTC m=+0.137687826 container start edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.392235293 +0000 UTC m=+0.141303359 container attach edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:38:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:15 np0005603663 sleepy_rhodes[249942]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:38:15 np0005603663 sleepy_rhodes[249942]: --> All data devices are unavailable
Jan 31 03:38:15 np0005603663 systemd[1]: libpod-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope: Deactivated successfully.
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.860996728 +0000 UTC m=+0.610064724 container died edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:38:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17-merged.mount: Deactivated successfully.
Jan 31 03:38:15 np0005603663 podman[249925]: 2026-01-31 08:38:15.946989617 +0000 UTC m=+0.696057603 container remove edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:38:15 np0005603663 systemd[1]: libpod-conmon-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope: Deactivated successfully.
Jan 31 03:38:16 np0005603663 nova_compute[238824]: 2026-01-31 08:38:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:16 np0005603663 nova_compute[238824]: 2026-01-31 08:38:16.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.396921127 +0000 UTC m=+0.049049322 container create c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:38:16 np0005603663 systemd[1]: Started libpod-conmon-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope.
Jan 31 03:38:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.46293877 +0000 UTC m=+0.115066955 container init c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.37302855 +0000 UTC m=+0.025156725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.472295375 +0000 UTC m=+0.124423580 container start c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:38:16 np0005603663 heuristic_pike[250058]: 167 167
Jan 31 03:38:16 np0005603663 systemd[1]: libpod-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope: Deactivated successfully.
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.486840958 +0000 UTC m=+0.138969133 container attach c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.487756534 +0000 UTC m=+0.139884729 container died c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e4393ae062be713202300682d0681f2d06d1bc5435a314186d3f35637af85167-merged.mount: Deactivated successfully.
Jan 31 03:38:16 np0005603663 podman[250039]: 2026-01-31 08:38:16.525197606 +0000 UTC m=+0.177325791 container remove c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:38:16 np0005603663 systemd[1]: libpod-conmon-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope: Deactivated successfully.
Jan 31 03:38:16 np0005603663 podman[250057]: 2026-01-31 08:38:16.534375216 +0000 UTC m=+0.091672401 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:38:16 np0005603663 podman[250053]: 2026-01-31 08:38:16.590978771 +0000 UTC m=+0.146214478 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:16 np0005603663 podman[250124]: 2026-01-31 08:38:16.658805935 +0000 UTC m=+0.043457034 container create 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:38:16 np0005603663 systemd[1]: Started libpod-conmon-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope.
Jan 31 03:38:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:16 np0005603663 podman[250124]: 2026-01-31 08:38:16.638790147 +0000 UTC m=+0.023441276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:16 np0005603663 podman[250124]: 2026-01-31 08:38:16.752380429 +0000 UTC m=+0.137031588 container init 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:38:16 np0005603663 podman[250124]: 2026-01-31 08:38:16.758657507 +0000 UTC m=+0.143308606 container start 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:38:16 np0005603663 podman[250124]: 2026-01-31 08:38:16.762793714 +0000 UTC m=+0.147444803 container attach 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]: {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    "0": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "devices": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "/dev/loop3"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            ],
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_name": "ceph_lv0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_size": "21470642176",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "name": "ceph_lv0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "tags": {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_name": "ceph",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.crush_device_class": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.encrypted": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.objectstore": "bluestore",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_id": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.vdo": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.with_tpm": "0"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            },
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "vg_name": "ceph_vg0"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        }
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    ],
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    "1": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "devices": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "/dev/loop4"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            ],
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_name": "ceph_lv1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_size": "21470642176",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "name": "ceph_lv1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "tags": {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_name": "ceph",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.crush_device_class": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.encrypted": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.objectstore": "bluestore",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_id": "1",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.vdo": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.with_tpm": "0"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            },
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "vg_name": "ceph_vg1"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        }
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    ],
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    "2": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "devices": [
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "/dev/loop5"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            ],
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_name": "ceph_lv2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_size": "21470642176",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "name": "ceph_lv2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "tags": {
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.cluster_name": "ceph",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.crush_device_class": "",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.encrypted": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.objectstore": "bluestore",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osd_id": "2",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.vdo": "0",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:                "ceph.with_tpm": "0"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            },
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "type": "block",
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:            "vg_name": "ceph_vg2"
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:        }
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]:    ]
Jan 31 03:38:17 np0005603663 lucid_bohr[250141]: }
Jan 31 03:38:17 np0005603663 systemd[1]: libpod-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250124]: 2026-01-31 08:38:17.046529021 +0000 UTC m=+0.431180120 container died 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 03:38:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915-merged.mount: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250124]: 2026-01-31 08:38:17.09335784 +0000 UTC m=+0.478008929 container remove 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:38:17 np0005603663 systemd[1]: libpod-conmon-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.547245873 +0000 UTC m=+0.034615013 container create c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:38:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:17 np0005603663 systemd[1]: Started libpod-conmon-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope.
Jan 31 03:38:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.605921677 +0000 UTC m=+0.093290867 container init c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.610792905 +0000 UTC m=+0.098162085 container start c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.615445547 +0000 UTC m=+0.102814727 container attach c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:38:17 np0005603663 compassionate_mestorf[250240]: 167 167
Jan 31 03:38:17 np0005603663 systemd[1]: libpod-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.617140185 +0000 UTC m=+0.104509325 container died c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.534662496 +0000 UTC m=+0.022031646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-14a219c3dc351f63417c3ac153a9ac979c3d80c81f9a786ad0e1d6e2eb23506d-merged.mount: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250223]: 2026-01-31 08:38:17.650325926 +0000 UTC m=+0.137695086 container remove c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:38:17 np0005603663 systemd[1]: libpod-conmon-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope: Deactivated successfully.
Jan 31 03:38:17 np0005603663 podman[250266]: 2026-01-31 08:38:17.814088241 +0000 UTC m=+0.055181866 container create 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:38:17 np0005603663 systemd[1]: Started libpod-conmon-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope.
Jan 31 03:38:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:38:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:17 np0005603663 podman[250266]: 2026-01-31 08:38:17.78620906 +0000 UTC m=+0.027302695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:38:17 np0005603663 podman[250266]: 2026-01-31 08:38:17.899115042 +0000 UTC m=+0.140208677 container init 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:38:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:17 np0005603663 podman[250266]: 2026-01-31 08:38:17.906038069 +0000 UTC m=+0.147131684 container start 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:38:17 np0005603663 podman[250266]: 2026-01-31 08:38:17.914225121 +0000 UTC m=+0.155318766 container attach 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:38:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:38:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:38:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:38:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:38:18 np0005603663 lvm[250361]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:38:18 np0005603663 lvm[250360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:38:18 np0005603663 lvm[250360]: VG ceph_vg0 finished
Jan 31 03:38:18 np0005603663 lvm[250361]: VG ceph_vg1 finished
Jan 31 03:38:18 np0005603663 lvm[250363]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:38:18 np0005603663 lvm[250363]: VG ceph_vg2 finished
Jan 31 03:38:18 np0005603663 suspicious_chaum[250282]: {}
Jan 31 03:38:18 np0005603663 systemd[1]: libpod-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Deactivated successfully.
Jan 31 03:38:18 np0005603663 systemd[1]: libpod-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Consumed 1.150s CPU time.
Jan 31 03:38:18 np0005603663 podman[250266]: 2026-01-31 08:38:18.731621923 +0000 UTC m=+0.972715568 container died 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:38:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a-merged.mount: Deactivated successfully.
Jan 31 03:38:18 np0005603663 podman[250266]: 2026-01-31 08:38:18.868145615 +0000 UTC m=+1.109239230 container remove 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:38:18 np0005603663 systemd[1]: libpod-conmon-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Deactivated successfully.
Jan 31 03:38:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:38:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:38:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:19 np0005603663 nova_compute[238824]: 2026-01-31 08:38:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:19 np0005603663 nova_compute[238824]: 2026-01-31 08:38:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:19 np0005603663 nova_compute[238824]: 2026-01-31 08:38:19.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:19 np0005603663 nova_compute[238824]: 2026-01-31 08:38:19.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:38:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.361 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:38:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486414388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:38:20 np0005603663 nova_compute[238824]: 2026-01-31 08:38:20.907 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.052 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5075MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.116 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.117 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.131 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:38:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742062989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.671 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.676 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.696 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.698 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:38:21 np0005603663 nova_compute[238824]: 2026-01-31 08:38:21.698 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:22 np0005603663 nova_compute[238824]: 2026-01-31 08:38:22.698 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:22 np0005603663 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:22 np0005603663 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:38:22 np0005603663 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:38:22 np0005603663 nova_compute[238824]: 2026-01-31 08:38:22.751 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:38:23 np0005603663 nova_compute[238824]: 2026-01-31 08:38:23.387 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:24 np0005603663 nova_compute[238824]: 2026-01-31 08:38:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:38:31
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'default.rgw.control']
Jan 31 03:38:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:38:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 03:38:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 03:38:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 03:38:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 03:38:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:38:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 03:38:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 03:38:47 np0005603663 podman[250448]: 2026-01-31 08:38:47.219007541 +0000 UTC m=+0.111677129 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:38:47 np0005603663 podman[250447]: 2026-01-31 08:38:47.230404883 +0000 UTC m=+0.122381161 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:38:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:38:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:08 np0005603663 nova_compute[238824]: 2026-01-31 08:39:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:14 np0005603663 nova_compute[238824]: 2026-01-31 08:39:14.401 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:14 np0005603663 nova_compute[238824]: 2026-01-31 08:39:14.402 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:39:14 np0005603663 nova_compute[238824]: 2026-01-31 08:39:14.423 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:39:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:16 np0005603663 nova_compute[238824]: 2026-01-31 08:39:16.361 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:17 np0005603663 nova_compute[238824]: 2026-01-31 08:39:17.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:39:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:39:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:39:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:39:18 np0005603663 podman[250494]: 2026-01-31 08:39:18.172706956 +0000 UTC m=+0.061741082 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:39:18 np0005603663 podman[250493]: 2026-01-31 08:39:18.197021615 +0000 UTC m=+0.087047339 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:39:18 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:39:19 np0005603663 nova_compute[238824]: 2026-01-31 08:39:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:19 np0005603663 podman[250633]: 2026-01-31 08:39:19.897926205 +0000 UTC m=+0.437842419 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 03:39:20 np0005603663 podman[250655]: 2026-01-31 08:39:20.114500978 +0000 UTC m=+0.097046924 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:39:20 np0005603663 nova_compute[238824]: 2026-01-31 08:39:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:20 np0005603663 nova_compute[238824]: 2026-01-31 08:39:20.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:39:20 np0005603663 podman[250633]: 2026-01-31 08:39:20.384757353 +0000 UTC m=+0.924673487 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:39:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:21 np0005603663 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:39:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:39:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:39:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.351 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.417 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.417 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.594566997 +0000 UTC m=+0.023517448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.69021603 +0000 UTC m=+0.119166451 container create 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:39:22 np0005603663 systemd[1]: Started libpod-conmon-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope.
Jan 31 03:39:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.842283443 +0000 UTC m=+0.271233884 container init 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.850013782 +0000 UTC m=+0.278964193 container start 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:39:22 np0005603663 inspiring_sinoussi[250999]: 167 167
Jan 31 03:39:22 np0005603663 systemd[1]: libpod-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope: Deactivated successfully.
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.953764985 +0000 UTC m=+0.382715416 container attach 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:39:22 np0005603663 podman[250983]: 2026-01-31 08:39:22.954619629 +0000 UTC m=+0.383570090 container died 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:39:22 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3963587154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:39:22 np0005603663 nova_compute[238824]: 2026-01-31 08:39:22.995 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.177 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5071MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-90d9d8a69b9c5266cef28cb8e77243d306054e7d4fbf4df620f1d42ea1eab311-merged.mount: Deactivated successfully.
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.242 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.243 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.262 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:23 np0005603663 podman[250983]: 2026-01-31 08:39:23.491531516 +0000 UTC m=+0.920481927 container remove 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:39:23 np0005603663 systemd[1]: libpod-conmon-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope: Deactivated successfully.
Jan 31 03:39:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:23 np0005603663 podman[251045]: 2026-01-31 08:39:23.609929024 +0000 UTC m=+0.026124502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:23 np0005603663 podman[251045]: 2026-01-31 08:39:23.751246262 +0000 UTC m=+0.167441720 container create fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:39:23 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:39:23 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907332852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:39:23 np0005603663 systemd[1]: Started libpod-conmon-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope.
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.909 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.915 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:39:23 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:23 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.935 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.937 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:39:23 np0005603663 nova_compute[238824]: 2026-01-31 08:39:23.937 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:23 np0005603663 podman[251045]: 2026-01-31 08:39:23.943115913 +0000 UTC m=+0.359311361 container init fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:39:23 np0005603663 podman[251045]: 2026-01-31 08:39:23.948818845 +0000 UTC m=+0.365014273 container start fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:39:23 np0005603663 podman[251045]: 2026-01-31 08:39:23.957533362 +0000 UTC m=+0.373728820 container attach fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:39:24 np0005603663 nervous_ganguly[251063]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:39:24 np0005603663 nervous_ganguly[251063]: --> All data devices are unavailable
Jan 31 03:39:24 np0005603663 systemd[1]: libpod-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope: Deactivated successfully.
Jan 31 03:39:24 np0005603663 podman[251045]: 2026-01-31 08:39:24.379921312 +0000 UTC m=+0.796116780 container died fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:39:24 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1-merged.mount: Deactivated successfully.
Jan 31 03:39:24 np0005603663 podman[251045]: 2026-01-31 08:39:24.435410346 +0000 UTC m=+0.851605774 container remove fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:24 np0005603663 systemd[1]: libpod-conmon-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope: Deactivated successfully.
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.856274022 +0000 UTC m=+0.034536450 container create b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:39:24 np0005603663 systemd[1]: Started libpod-conmon-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope.
Jan 31 03:39:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.929633583 +0000 UTC m=+0.107896051 container init b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.933903984 +0000 UTC m=+0.112166442 container start b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.937481165 +0000 UTC m=+0.115743623 container attach b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.841976107 +0000 UTC m=+0.020238565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:24 np0005603663 unruffled_khayyam[251173]: 167 167
Jan 31 03:39:24 np0005603663 systemd[1]: libpod-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope: Deactivated successfully.
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.939517203 +0000 UTC m=+0.117779651 container died b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:39:24 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5f2a95b7372c64d771e453eb6d1aa90f97bec0c8f6d5cbac3f020d7c767646be-merged.mount: Deactivated successfully.
Jan 31 03:39:24 np0005603663 podman[251157]: 2026-01-31 08:39:24.97362348 +0000 UTC m=+0.151885918 container remove b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:39:24 np0005603663 systemd[1]: libpod-conmon-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope: Deactivated successfully.
Jan 31 03:39:25 np0005603663 podman[251196]: 2026-01-31 08:39:25.107187739 +0000 UTC m=+0.039162772 container create 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:39:25 np0005603663 systemd[1]: Started libpod-conmon-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope.
Jan 31 03:39:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:25 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:25 np0005603663 podman[251196]: 2026-01-31 08:39:25.173858649 +0000 UTC m=+0.105833682 container init 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:39:25 np0005603663 podman[251196]: 2026-01-31 08:39:25.184230014 +0000 UTC m=+0.116205047 container start 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:39:25 np0005603663 podman[251196]: 2026-01-31 08:39:25.088380645 +0000 UTC m=+0.020355698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:25 np0005603663 podman[251196]: 2026-01-31 08:39:25.187533767 +0000 UTC m=+0.119508800 container attach 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]: {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    "0": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "devices": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "/dev/loop3"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            ],
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_name": "ceph_lv0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_size": "21470642176",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "name": "ceph_lv0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "tags": {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_name": "ceph",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.crush_device_class": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.encrypted": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.objectstore": "bluestore",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_id": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.vdo": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.with_tpm": "0"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            },
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "vg_name": "ceph_vg0"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        }
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    ],
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    "1": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "devices": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "/dev/loop4"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            ],
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_name": "ceph_lv1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_size": "21470642176",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "name": "ceph_lv1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "tags": {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_name": "ceph",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.crush_device_class": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.encrypted": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.objectstore": "bluestore",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_id": "1",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.vdo": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.with_tpm": "0"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            },
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "vg_name": "ceph_vg1"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        }
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    ],
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    "2": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "devices": [
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "/dev/loop5"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            ],
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_name": "ceph_lv2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_size": "21470642176",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "name": "ceph_lv2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "tags": {
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.cluster_name": "ceph",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.crush_device_class": "",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.encrypted": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.objectstore": "bluestore",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osd_id": "2",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.vdo": "0",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:                "ceph.with_tpm": "0"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            },
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "type": "block",
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:            "vg_name": "ceph_vg2"
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:        }
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]:    ]
Jan 31 03:39:25 np0005603663 laughing_feistel[251213]: }
Jan 31 03:39:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:25 np0005603663 systemd[1]: libpod-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope: Deactivated successfully.
Jan 31 03:39:25 np0005603663 podman[251222]: 2026-01-31 08:39:25.494266797 +0000 UTC m=+0.022750606 container died 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:39:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626-merged.mount: Deactivated successfully.
Jan 31 03:39:25 np0005603663 podman[251222]: 2026-01-31 08:39:25.53738971 +0000 UTC m=+0.065873509 container remove 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:39:25 np0005603663 systemd[1]: libpod-conmon-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope: Deactivated successfully.
Jan 31 03:39:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.894442457 +0000 UTC m=+0.029474957 container create d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:39:25 np0005603663 nova_compute[238824]: 2026-01-31 08:39:25.921 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:25 np0005603663 nova_compute[238824]: 2026-01-31 08:39:25.922 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:25 np0005603663 systemd[1]: Started libpod-conmon-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope.
Jan 31 03:39:25 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.954220022 +0000 UTC m=+0.089252542 container init d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.959039539 +0000 UTC m=+0.094072039 container start d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:39:25 np0005603663 ecstatic_sammet[251312]: 167 167
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.962562779 +0000 UTC m=+0.097595299 container attach d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:39:25 np0005603663 systemd[1]: libpod-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope: Deactivated successfully.
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.963295949 +0000 UTC m=+0.098328449 container died d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.881349995 +0000 UTC m=+0.016382515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9297b5ee26d0d4b3926d7328d84a99892c246367d10faa652122a162ebfd99ba-merged.mount: Deactivated successfully.
Jan 31 03:39:25 np0005603663 podman[251296]: 2026-01-31 08:39:25.996148581 +0000 UTC m=+0.131181131 container remove d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:39:26 np0005603663 systemd[1]: libpod-conmon-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope: Deactivated successfully.
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.11070885 +0000 UTC m=+0.035156728 container create 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:39:26 np0005603663 systemd[1]: Started libpod-conmon-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope.
Jan 31 03:39:26 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:39:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:26 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.183785593 +0000 UTC m=+0.108233511 container init 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.189273409 +0000 UTC m=+0.113721297 container start 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.095685014 +0000 UTC m=+0.020132912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.192821049 +0000 UTC m=+0.117268947 container attach 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:39:26 np0005603663 lvm[251431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:39:26 np0005603663 lvm[251431]: VG ceph_vg0 finished
Jan 31 03:39:26 np0005603663 lvm[251433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:39:26 np0005603663 lvm[251433]: VG ceph_vg1 finished
Jan 31 03:39:26 np0005603663 lvm[251435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:39:26 np0005603663 lvm[251435]: VG ceph_vg2 finished
Jan 31 03:39:26 np0005603663 gallant_kare[251354]: {}
Jan 31 03:39:26 np0005603663 systemd[1]: libpod-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope: Deactivated successfully.
Jan 31 03:39:26 np0005603663 podman[251337]: 2026-01-31 08:39:26.904855423 +0000 UTC m=+0.829303371 container died 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:39:27 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee-merged.mount: Deactivated successfully.
Jan 31 03:39:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:27 np0005603663 podman[251337]: 2026-01-31 08:39:27.609790866 +0000 UTC m=+1.534238774 container remove 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:39:27 np0005603663 systemd[1]: libpod-conmon-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope: Deactivated successfully.
Jan 31 03:39:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:39:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:39:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:29 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:29 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:39:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:39:31
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr']
Jan 31 03:39:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:39:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:39:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:49 np0005603663 podman[251477]: 2026-01-31 08:39:49.203141077 +0000 UTC m=+0.090255371 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:39:49 np0005603663 podman[251476]: 2026-01-31 08:39:49.232325794 +0000 UTC m=+0.119412828 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:39:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:39:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.681958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801682034, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2085, "num_deletes": 252, "total_data_size": 3636716, "memory_usage": 3691328, "flush_reason": "Manual Compaction"}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801708577, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3535660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21099, "largest_seqno": 23183, "table_properties": {"data_size": 3526101, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19014, "raw_average_key_size": 20, "raw_value_size": 3506999, "raw_average_value_size": 3703, "num_data_blocks": 276, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848579, "oldest_key_time": 1769848579, "file_creation_time": 1769848801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 26697 microseconds, and 7909 cpu microseconds.
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.708660) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3535660 bytes OK
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.708689) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711384) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711404) EVENT_LOG_v1 {"time_micros": 1769848801711399, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3627953, prev total WAL file size 3627953, number of live WAL files 2.
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.712558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3452KB)], [50(7835KB)]
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801712632, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11559270, "oldest_snapshot_seqno": -1}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4839 keys, 9759553 bytes, temperature: kUnknown
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801769863, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9759553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9724025, "index_size": 22298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 118613, "raw_average_key_size": 24, "raw_value_size": 9633398, "raw_average_value_size": 1990, "num_data_blocks": 937, "num_entries": 4839, "num_filter_entries": 4839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.770138) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9759553 bytes
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.772089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.7 rd, 170.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5359, records dropped: 520 output_compression: NoCompression
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.772109) EVENT_LOG_v1 {"time_micros": 1769848801772099, "job": 26, "event": "compaction_finished", "compaction_time_micros": 57321, "compaction_time_cpu_micros": 16734, "output_level": 6, "num_output_files": 1, "total_output_size": 9759553, "num_input_records": 5359, "num_output_records": 4839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801772722, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801773973, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.712445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:40:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:40:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:40:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:40:18 np0005603663 nova_compute[238824]: 2026-01-31 08:40:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:19 np0005603663 nova_compute[238824]: 2026-01-31 08:40:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:19 np0005603663 nova_compute[238824]: 2026-01-31 08:40:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:20 np0005603663 podman[251522]: 2026-01-31 08:40:20.201892241 +0000 UTC m=+0.049373021 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:40:20 np0005603663 podman[251521]: 2026-01-31 08:40:20.210135745 +0000 UTC m=+0.065272732 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:40:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:21 np0005603663 nova_compute[238824]: 2026-01-31 08:40:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:21 np0005603663 nova_compute[238824]: 2026-01-31 08:40:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:40:21 np0005603663 nova_compute[238824]: 2026-01-31 08:40:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:40:21 np0005603663 nova_compute[238824]: 2026-01-31 08:40:21.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:40:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:22 np0005603663 nova_compute[238824]: 2026-01-31 08:40:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:22 np0005603663 nova_compute[238824]: 2026-01-31 08:40:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:40:23 np0005603663 nova_compute[238824]: 2026-01-31 08:40:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.371 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.371 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:24 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:40:24 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612032007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:40:24 np0005603663 nova_compute[238824]: 2026-01-31 08:40:24.957 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.076 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5135MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.298 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.298 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.366 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.443 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.444 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:40:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.458 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.482 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:40:25 np0005603663 nova_compute[238824]: 2026-01-31 08:40:25.503 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:40:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045148885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:40:26 np0005603663 nova_compute[238824]: 2026-01-31 08:40:26.128 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:26 np0005603663 nova_compute[238824]: 2026-01-31 08:40:26.133 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:40:26 np0005603663 nova_compute[238824]: 2026-01-31 08:40:26.150 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:40:26 np0005603663 nova_compute[238824]: 2026-01-31 08:40:26.152 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:40:26 np0005603663 nova_compute[238824]: 2026-01-31 08:40:26.152 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:27 np0005603663 nova_compute[238824]: 2026-01-31 08:40:27.147 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:27 np0005603663 nova_compute[238824]: 2026-01-31 08:40:27.148 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.059830176 +0000 UTC m=+0.051314817 container create 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:29 np0005603663 systemd[1]: Started libpod-conmon-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope.
Jan 31 03:40:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.031036969 +0000 UTC m=+0.022521660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.17845562 +0000 UTC m=+0.169940311 container init 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.183641267 +0000 UTC m=+0.175125908 container start 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:29 np0005603663 xenodochial_dubinsky[251764]: 167 167
Jan 31 03:40:29 np0005603663 systemd[1]: libpod-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope: Deactivated successfully.
Jan 31 03:40:29 np0005603663 conmon[251764]: conmon 20f7bf795b360e0324db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope/container/memory.events
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.198689324 +0000 UTC m=+0.190174025 container attach 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.199551359 +0000 UTC m=+0.191036000 container died 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-89a40971cf598a282cbbffbafc88874a40fa62431fe6f85233095e42a966ae75-merged.mount: Deactivated successfully.
Jan 31 03:40:29 np0005603663 podman[251748]: 2026-01-31 08:40:29.291158837 +0000 UTC m=+0.282643478 container remove 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:29 np0005603663 systemd[1]: libpod-conmon-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope: Deactivated successfully.
Jan 31 03:40:29 np0005603663 podman[251790]: 2026-01-31 08:40:29.477416489 +0000 UTC m=+0.047081306 container create 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:40:29 np0005603663 systemd[1]: Started libpod-conmon-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope.
Jan 31 03:40:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:29 np0005603663 podman[251790]: 2026-01-31 08:40:29.455099896 +0000 UTC m=+0.024764783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:29 np0005603663 podman[251790]: 2026-01-31 08:40:29.603728902 +0000 UTC m=+0.173393719 container init 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:40:29 np0005603663 podman[251790]: 2026-01-31 08:40:29.609640829 +0000 UTC m=+0.179305626 container start 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:40:29 np0005603663 podman[251790]: 2026-01-31 08:40:29.619739476 +0000 UTC m=+0.189404293 container attach 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:40:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:30 np0005603663 hardcore_euler[251807]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:40:30 np0005603663 hardcore_euler[251807]: --> All data devices are unavailable
Jan 31 03:40:30 np0005603663 systemd[1]: libpod-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope: Deactivated successfully.
Jan 31 03:40:30 np0005603663 podman[251790]: 2026-01-31 08:40:30.073041252 +0000 UTC m=+0.642706059 container died 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:40:30 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3-merged.mount: Deactivated successfully.
Jan 31 03:40:30 np0005603663 podman[251790]: 2026-01-31 08:40:30.153371141 +0000 UTC m=+0.723035948 container remove 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:40:30 np0005603663 systemd[1]: libpod-conmon-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope: Deactivated successfully.
Jan 31 03:40:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.560777445 +0000 UTC m=+0.016765706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.687116969 +0000 UTC m=+0.143105240 container create 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:40:30 np0005603663 systemd[1]: Started libpod-conmon-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope.
Jan 31 03:40:30 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.952356771 +0000 UTC m=+0.408345042 container init 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.958128425 +0000 UTC m=+0.414116676 container start 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:30 np0005603663 systemd[1]: libpod-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope: Deactivated successfully.
Jan 31 03:40:30 np0005603663 sleepy_proskuriakova[251917]: 167 167
Jan 31 03:40:30 np0005603663 conmon[251917]: conmon 358a31c4272ebe888bf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope/container/memory.events
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.96501359 +0000 UTC m=+0.421001851 container attach 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:40:30 np0005603663 podman[251901]: 2026-01-31 08:40:30.96533915 +0000 UTC m=+0.421327381 container died 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:40:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f7aad97b21a2ee24ea5123aed401d5cdffa939e784df76518808d53e19ac77d2-merged.mount: Deactivated successfully.
Jan 31 03:40:31 np0005603663 podman[251901]: 2026-01-31 08:40:31.026463373 +0000 UTC m=+0.482451654 container remove 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:31 np0005603663 systemd[1]: libpod-conmon-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope: Deactivated successfully.
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.156337656 +0000 UTC m=+0.039405249 container create 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:40:31 np0005603663 systemd[1]: Started libpod-conmon-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope.
Jan 31 03:40:31 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:31 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.136950946 +0000 UTC m=+0.020018519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.245410982 +0000 UTC m=+0.128478535 container init 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.253382088 +0000 UTC m=+0.136449631 container start 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.258086921 +0000 UTC m=+0.141154494 container attach 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]: {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    "0": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "devices": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "/dev/loop3"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            ],
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_name": "ceph_lv0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_size": "21470642176",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "name": "ceph_lv0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "tags": {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_name": "ceph",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.crush_device_class": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.encrypted": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.objectstore": "bluestore",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_id": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.vdo": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.with_tpm": "0"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            },
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "vg_name": "ceph_vg0"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        }
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    ],
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    "1": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "devices": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "/dev/loop4"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            ],
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_name": "ceph_lv1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_size": "21470642176",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "name": "ceph_lv1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "tags": {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_name": "ceph",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.crush_device_class": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.encrypted": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.objectstore": "bluestore",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_id": "1",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.vdo": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.with_tpm": "0"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            },
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "vg_name": "ceph_vg1"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        }
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    ],
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    "2": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "devices": [
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "/dev/loop5"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            ],
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_name": "ceph_lv2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_size": "21470642176",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "name": "ceph_lv2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "tags": {
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.cluster_name": "ceph",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.crush_device_class": "",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.encrypted": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.objectstore": "bluestore",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osd_id": "2",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.vdo": "0",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:                "ceph.with_tpm": "0"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            },
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "type": "block",
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:            "vg_name": "ceph_vg2"
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:        }
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]:    ]
Jan 31 03:40:31 np0005603663 reverent_bhaskara[251957]: }
Jan 31 03:40:31 np0005603663 systemd[1]: libpod-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope: Deactivated successfully.
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.504729617 +0000 UTC m=+0.387797210 container died 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae-merged.mount: Deactivated successfully.
Jan 31 03:40:31 np0005603663 podman[251941]: 2026-01-31 08:40:31.726709862 +0000 UTC m=+0.609777425 container remove 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:40:31 np0005603663 systemd[1]: libpod-conmon-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope: Deactivated successfully.
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:40:31
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', '.mgr', 'images']
Jan 31 03:40:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.175316296 +0000 UTC m=+0.042832836 container create afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:40:32 np0005603663 systemd[1]: Started libpod-conmon-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope.
Jan 31 03:40:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.24529592 +0000 UTC m=+0.112812490 container init afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.152134468 +0000 UTC m=+0.019651028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.251363182 +0000 UTC m=+0.118879732 container start afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 03:40:32 np0005603663 priceless_galois[252057]: 167 167
Jan 31 03:40:32 np0005603663 systemd[1]: libpod-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope: Deactivated successfully.
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.258444613 +0000 UTC m=+0.125961193 container attach afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.258811154 +0000 UTC m=+0.126327704 container died afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0ce0a02109c09f4b3fdc09f3ca3eb079794c3bfcb46e4869afd123768a452d4c-merged.mount: Deactivated successfully.
Jan 31 03:40:32 np0005603663 podman[252041]: 2026-01-31 08:40:32.324522857 +0000 UTC m=+0.192039407 container remove afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:32 np0005603663 systemd[1]: libpod-conmon-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope: Deactivated successfully.
Jan 31 03:40:32 np0005603663 podman[252080]: 2026-01-31 08:40:32.415867138 +0000 UTC m=+0.017297951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:40:32 np0005603663 podman[252080]: 2026-01-31 08:40:32.600625238 +0000 UTC m=+0.202056031 container create a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:40:32 np0005603663 systemd[1]: Started libpod-conmon-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope.
Jan 31 03:40:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:40:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:32 np0005603663 podman[252080]: 2026-01-31 08:40:32.823643913 +0000 UTC m=+0.425074716 container init a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:32 np0005603663 podman[252080]: 2026-01-31 08:40:32.83091956 +0000 UTC m=+0.432350353 container start a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:40:32 np0005603663 podman[252080]: 2026-01-31 08:40:32.838663889 +0000 UTC m=+0.440094712 container attach a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:40:33 np0005603663 lvm[252175]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:40:33 np0005603663 lvm[252175]: VG ceph_vg0 finished
Jan 31 03:40:33 np0005603663 lvm[252174]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:40:33 np0005603663 lvm[252174]: VG ceph_vg1 finished
Jan 31 03:40:33 np0005603663 lvm[252177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:40:33 np0005603663 lvm[252177]: VG ceph_vg2 finished
Jan 31 03:40:33 np0005603663 angry_gates[252096]: {}
Jan 31 03:40:33 np0005603663 systemd[1]: libpod-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Deactivated successfully.
Jan 31 03:40:33 np0005603663 systemd[1]: libpod-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Consumed 1.027s CPU time.
Jan 31 03:40:33 np0005603663 podman[252080]: 2026-01-31 08:40:33.566043099 +0000 UTC m=+1.167473902 container died a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:40:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47-merged.mount: Deactivated successfully.
Jan 31 03:40:33 np0005603663 podman[252080]: 2026-01-31 08:40:33.601789483 +0000 UTC m=+1.203220256 container remove a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:33 np0005603663 systemd[1]: libpod-conmon-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Deactivated successfully.
Jan 31 03:40:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:40:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:40:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:34 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:40:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:40:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:51 np0005603663 podman[252217]: 2026-01-31 08:40:51.19198175 +0000 UTC m=+0.080248517 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:40:51 np0005603663 podman[252218]: 2026-01-31 08:40:51.191763113 +0000 UTC m=+0.080323279 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:40:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:40:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:41:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:41:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:41:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:41:18 np0005603663 nova_compute[238824]: 2026-01-31 08:41:18.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:20 np0005603663 nova_compute[238824]: 2026-01-31 08:41:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:21 np0005603663 nova_compute[238824]: 2026-01-31 08:41:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:22 np0005603663 podman[252259]: 2026-01-31 08:41:22.147166921 +0000 UTC m=+0.039065059 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:41:22 np0005603663 podman[252258]: 2026-01-31 08:41:22.176874183 +0000 UTC m=+0.070346096 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:41:22 np0005603663 nova_compute[238824]: 2026-01-31 08:41:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:22 np0005603663 nova_compute[238824]: 2026-01-31 08:41:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:41:23 np0005603663 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:23 np0005603663 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:41:23 np0005603663 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:41:23 np0005603663 nova_compute[238824]: 2026-01-31 08:41:23.429 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:41:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:24 np0005603663 nova_compute[238824]: 2026-01-31 08:41:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:24 np0005603663 nova_compute[238824]: 2026-01-31 08:41:24.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:25 np0005603663 nova_compute[238824]: 2026-01-31 08:41:25.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.441 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.441 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:41:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289803331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:41:28 np0005603663 nova_compute[238824]: 2026-01-31 08:41:28.927 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.075 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.076 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.718 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.719 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:41:29 np0005603663 nova_compute[238824]: 2026-01-31 08:41:29.744 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:41:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756240464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:41:30 np0005603663 nova_compute[238824]: 2026-01-31 08:41:30.518 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:30 np0005603663 nova_compute[238824]: 2026-01-31 08:41:30.523 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:30 np0005603663 nova_compute[238824]: 2026-01-31 08:41:30.633 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:30 np0005603663 nova_compute[238824]: 2026-01-31 08:41:30.635 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:41:30 np0005603663 nova_compute[238824]: 2026-01-31 08:41:30.635 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:31 np0005603663 nova_compute[238824]: 2026-01-31 08:41:31.630 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:41:31
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'images', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 03:41:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:41:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:41:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:41:34 np0005603663 podman[252490]: 2026-01-31 08:41:34.842974085 +0000 UTC m=+0.019575996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:34 np0005603663 podman[252490]: 2026-01-31 08:41:34.995713427 +0000 UTC m=+0.172315318 container create d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:41:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:41:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:35 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:41:35 np0005603663 systemd[1]: Started libpod-conmon-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope.
Jan 31 03:41:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:35 np0005603663 podman[252490]: 2026-01-31 08:41:35.347144864 +0000 UTC m=+0.523746815 container init d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:41:35 np0005603663 podman[252490]: 2026-01-31 08:41:35.355916033 +0000 UTC m=+0.532517964 container start d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:41:35 np0005603663 optimistic_shtern[252507]: 167 167
Jan 31 03:41:35 np0005603663 systemd[1]: libpod-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope: Deactivated successfully.
Jan 31 03:41:35 np0005603663 podman[252490]: 2026-01-31 08:41:35.527823187 +0000 UTC m=+0.704425108 container attach d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:41:35 np0005603663 podman[252490]: 2026-01-31 08:41:35.528189937 +0000 UTC m=+0.704791858 container died d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:41:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:36 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3ba3be8de05bdfc7b639e1009b40d2b182fff5c20e2480bdf7900e7d04696e18-merged.mount: Deactivated successfully.
Jan 31 03:41:36 np0005603663 podman[252490]: 2026-01-31 08:41:36.595835768 +0000 UTC m=+1.772437679 container remove d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:41:36 np0005603663 systemd[1]: libpod-conmon-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope: Deactivated successfully.
Jan 31 03:41:36 np0005603663 podman[252533]: 2026-01-31 08:41:36.720781622 +0000 UTC m=+0.032442281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:36 np0005603663 podman[252533]: 2026-01-31 08:41:36.91711045 +0000 UTC m=+0.228771029 container create 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:41:37 np0005603663 systemd[1]: Started libpod-conmon-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope.
Jan 31 03:41:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:37 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:37 np0005603663 podman[252533]: 2026-01-31 08:41:37.173379138 +0000 UTC m=+0.485039747 container init 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:41:37 np0005603663 podman[252533]: 2026-01-31 08:41:37.179994816 +0000 UTC m=+0.491655415 container start 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:37 np0005603663 podman[252533]: 2026-01-31 08:41:37.297932211 +0000 UTC m=+0.609592850 container attach 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:41:37 np0005603663 kind_carson[252550]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:41:37 np0005603663 kind_carson[252550]: --> All data devices are unavailable
Jan 31 03:41:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:37 np0005603663 systemd[1]: libpod-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope: Deactivated successfully.
Jan 31 03:41:37 np0005603663 podman[252533]: 2026-01-31 08:41:37.680914253 +0000 UTC m=+0.992574842 container died 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 31 03:41:38 np0005603663 systemd[1]: var-lib-containers-storage-overlay-03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0-merged.mount: Deactivated successfully.
Jan 31 03:41:39 np0005603663 podman[252533]: 2026-01-31 08:41:39.185022331 +0000 UTC m=+2.496682900 container remove 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:41:39 np0005603663 systemd[1]: libpod-conmon-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope: Deactivated successfully.
Jan 31 03:41:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:39 np0005603663 podman[252646]: 2026-01-31 08:41:39.591977333 +0000 UTC m=+0.023204059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:39 np0005603663 podman[252646]: 2026-01-31 08:41:39.824849647 +0000 UTC m=+0.256076383 container create 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:41:40 np0005603663 systemd[1]: Started libpod-conmon-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope.
Jan 31 03:41:40 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:40 np0005603663 podman[252646]: 2026-01-31 08:41:40.288532318 +0000 UTC m=+0.719759044 container init 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:41:40 np0005603663 podman[252646]: 2026-01-31 08:41:40.294041334 +0000 UTC m=+0.725268070 container start 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:41:40 np0005603663 practical_buck[252662]: 167 167
Jan 31 03:41:40 np0005603663 systemd[1]: libpod-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope: Deactivated successfully.
Jan 31 03:41:40 np0005603663 podman[252646]: 2026-01-31 08:41:40.393002981 +0000 UTC m=+0.824229707 container attach 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:41:40 np0005603663 podman[252646]: 2026-01-31 08:41:40.39436777 +0000 UTC m=+0.825594466 container died 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:41:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:40 np0005603663 systemd[1]: var-lib-containers-storage-overlay-939671ef586eccdc19146d93def1b76eb26782aaa5d2fff4475d77cb0305ed04-merged.mount: Deactivated successfully.
Jan 31 03:41:41 np0005603663 podman[252646]: 2026-01-31 08:41:41.152224573 +0000 UTC m=+1.583451279 container remove 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:41:41 np0005603663 systemd[1]: libpod-conmon-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope: Deactivated successfully.
Jan 31 03:41:41 np0005603663 podman[252687]: 2026-01-31 08:41:41.274918873 +0000 UTC m=+0.027834080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:41 np0005603663 podman[252687]: 2026-01-31 08:41:41.541656278 +0000 UTC m=+0.294571385 container create a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:41:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:41 np0005603663 systemd[1]: Started libpod-conmon-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope.
Jan 31 03:41:41 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:41 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:41 np0005603663 podman[252687]: 2026-01-31 08:41:41.934085609 +0000 UTC m=+0.687000756 container init a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:41:41 np0005603663 podman[252687]: 2026-01-31 08:41:41.941094748 +0000 UTC m=+0.694009865 container start a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]: {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    "0": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "devices": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "/dev/loop3"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            ],
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_name": "ceph_lv0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_size": "21470642176",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "name": "ceph_lv0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "tags": {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_name": "ceph",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.crush_device_class": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.encrypted": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.objectstore": "bluestore",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_id": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.vdo": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.with_tpm": "0"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            },
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "vg_name": "ceph_vg0"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        }
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    ],
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    "1": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "devices": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "/dev/loop4"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            ],
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_name": "ceph_lv1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_size": "21470642176",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "name": "ceph_lv1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "tags": {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_name": "ceph",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.crush_device_class": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.encrypted": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.objectstore": "bluestore",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_id": "1",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.vdo": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.with_tpm": "0"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            },
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "vg_name": "ceph_vg1"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        }
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    ],
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    "2": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "devices": [
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "/dev/loop5"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            ],
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_name": "ceph_lv2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_size": "21470642176",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "name": "ceph_lv2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "tags": {
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.cluster_name": "ceph",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.crush_device_class": "",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.encrypted": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.objectstore": "bluestore",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osd_id": "2",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.vdo": "0",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:                "ceph.with_tpm": "0"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            },
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "type": "block",
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:            "vg_name": "ceph_vg2"
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:        }
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]:    ]
Jan 31 03:41:42 np0005603663 brave_torvalds[252704]: }
Jan 31 03:41:42 np0005603663 systemd[1]: libpod-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope: Deactivated successfully.
Jan 31 03:41:42 np0005603663 podman[252687]: 2026-01-31 08:41:42.224366402 +0000 UTC m=+0.977281519 container attach a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:41:42 np0005603663 podman[252687]: 2026-01-31 08:41:42.22536018 +0000 UTC m=+0.978275337 container died a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:41:42 np0005603663 systemd[1]: var-lib-containers-storage-overlay-ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07-merged.mount: Deactivated successfully.
Jan 31 03:41:43 np0005603663 podman[252687]: 2026-01-31 08:41:43.247704714 +0000 UTC m=+2.000619831 container remove a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:41:43 np0005603663 systemd[1]: libpod-conmon-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope: Deactivated successfully.
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:41:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:43 np0005603663 podman[252788]: 2026-01-31 08:41:43.657909829 +0000 UTC m=+0.022398757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:44 np0005603663 podman[252788]: 2026-01-31 08:41:44.004365684 +0000 UTC m=+0.368854512 container create fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:41:44 np0005603663 systemd[1]: Started libpod-conmon-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope.
Jan 31 03:41:44 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:44 np0005603663 podman[252788]: 2026-01-31 08:41:44.338200092 +0000 UTC m=+0.702688940 container init fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 03:41:44 np0005603663 podman[252788]: 2026-01-31 08:41:44.343106672 +0000 UTC m=+0.707595500 container start fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:44 np0005603663 interesting_herschel[252804]: 167 167
Jan 31 03:41:44 np0005603663 systemd[1]: libpod-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope: Deactivated successfully.
Jan 31 03:41:44 np0005603663 podman[252788]: 2026-01-31 08:41:44.491027817 +0000 UTC m=+0.855516675 container attach fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:41:44 np0005603663 podman[252788]: 2026-01-31 08:41:44.49149435 +0000 UTC m=+0.855983178 container died fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 03:41:44 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b7122c1b68c5b431a6f89a44e9a06857c99fd4e223f9a702ea352031000742af-merged.mount: Deactivated successfully.
Jan 31 03:41:45 np0005603663 podman[252788]: 2026-01-31 08:41:45.106827312 +0000 UTC m=+1.471316170 container remove fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:41:45 np0005603663 systemd[1]: libpod-conmon-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope: Deactivated successfully.
Jan 31 03:41:45 np0005603663 podman[252829]: 2026-01-31 08:41:45.231244841 +0000 UTC m=+0.018242448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:41:45 np0005603663 podman[252829]: 2026-01-31 08:41:45.412193983 +0000 UTC m=+0.199191600 container create ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:41:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:45 np0005603663 systemd[1]: Started libpod-conmon-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope.
Jan 31 03:41:45 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:41:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:45 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:45 np0005603663 podman[252829]: 2026-01-31 08:41:45.752572996 +0000 UTC m=+0.539570613 container init ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:41:45 np0005603663 podman[252829]: 2026-01-31 08:41:45.758793093 +0000 UTC m=+0.545790720 container start ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:41:45 np0005603663 podman[252829]: 2026-01-31 08:41:45.857834272 +0000 UTC m=+0.644831859 container attach ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:41:46 np0005603663 lvm[252927]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:41:46 np0005603663 lvm[252924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:41:46 np0005603663 lvm[252924]: VG ceph_vg0 finished
Jan 31 03:41:46 np0005603663 lvm[252927]: VG ceph_vg1 finished
Jan 31 03:41:46 np0005603663 lvm[252929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:41:46 np0005603663 lvm[252929]: VG ceph_vg2 finished
Jan 31 03:41:46 np0005603663 sharp_zhukovsky[252846]: {}
Jan 31 03:41:46 np0005603663 systemd[1]: libpod-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope: Deactivated successfully.
Jan 31 03:41:46 np0005603663 podman[252829]: 2026-01-31 08:41:46.459177716 +0000 UTC m=+1.246175333 container died ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:41:46 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d-merged.mount: Deactivated successfully.
Jan 31 03:41:46 np0005603663 podman[252829]: 2026-01-31 08:41:46.941381713 +0000 UTC m=+1.728379300 container remove ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:41:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:41:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:41:47 np0005603663 systemd[1]: libpod-conmon-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope: Deactivated successfully.
Jan 31 03:41:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:41:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:53 np0005603663 podman[252970]: 2026-01-31 08:41:53.16550355 +0000 UTC m=+0.054599859 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:41:53 np0005603663 podman[252969]: 2026-01-31 08:41:53.184011935 +0000 UTC m=+0.077064486 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:41:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:41:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:42:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:42:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:42:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:42:19 np0005603663 nova_compute[238824]: 2026-01-31 08:42:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.855369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848940855413, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1598, "num_deletes": 509, "total_data_size": 2087478, "memory_usage": 2130064, "flush_reason": "Manual Compaction"}
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848940940353, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1882506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23184, "largest_seqno": 24781, "table_properties": {"data_size": 1875876, "index_size": 3256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17357, "raw_average_key_size": 19, "raw_value_size": 1860360, "raw_average_value_size": 2044, "num_data_blocks": 147, "num_entries": 910, "num_filter_entries": 910, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848802, "oldest_key_time": 1769848802, "file_creation_time": 1769848940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 85023 microseconds, and 3633 cpu microseconds.
Jan 31 03:42:20 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.940395) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1882506 bytes OK
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.940412) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036615) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036671) EVENT_LOG_v1 {"time_micros": 1769848941036659, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2079455, prev total WAL file size 2079455, number of live WAL files 2.
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.037631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1838KB)], [53(9530KB)]
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941037712, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11642059, "oldest_snapshot_seqno": -1}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4731 keys, 8350433 bytes, temperature: kUnknown
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941149320, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8350433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8317767, "index_size": 19756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 117939, "raw_average_key_size": 24, "raw_value_size": 8231094, "raw_average_value_size": 1739, "num_data_blocks": 822, "num_entries": 4731, "num_filter_entries": 4731, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.149611) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8350433 bytes
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.179911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 74.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.6) write-amplify(4.4) OK, records in: 5749, records dropped: 1018 output_compression: NoCompression
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.179948) EVENT_LOG_v1 {"time_micros": 1769848941179934, "job": 28, "event": "compaction_finished", "compaction_time_micros": 111697, "compaction_time_cpu_micros": 27282, "output_level": 6, "num_output_files": 1, "total_output_size": 8350433, "num_input_records": 5749, "num_output_records": 4731, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941180409, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941181481, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.037532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:21 np0005603663 nova_compute[238824]: 2026-01-31 08:42:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:22 np0005603663 nova_compute[238824]: 2026-01-31 08:42:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:22 np0005603663 nova_compute[238824]: 2026-01-31 08:42:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:22 np0005603663 nova_compute[238824]: 2026-01-31 08:42:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:42:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:24 np0005603663 podman[253016]: 2026-01-31 08:42:24.16810593 +0000 UTC m=+0.060021149 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:42:24 np0005603663 podman[253015]: 2026-01-31 08:42:24.190237154 +0000 UTC m=+0.085286982 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:42:25 np0005603663 nova_compute[238824]: 2026-01-31 08:42:25.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:25 np0005603663 nova_compute[238824]: 2026-01-31 08:42:25.351 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:25 np0005603663 nova_compute[238824]: 2026-01-31 08:42:25.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:42:25 np0005603663 nova_compute[238824]: 2026-01-31 08:42:25.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:42:25 np0005603663 nova_compute[238824]: 2026-01-31 08:42:25.365 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:42:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.368 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:42:26 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461642590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:42:26 np0005603663 nova_compute[238824]: 2026-01-31 08:42:26.912 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.102 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.178 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.178 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.196 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:42:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181459331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.713 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.719 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.746 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.748 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:42:27 np0005603663 nova_compute[238824]: 2026-01-31 08:42:27.748 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:28 np0005603663 nova_compute[238824]: 2026-01-31 08:42:28.744 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:42:31
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 31 03:42:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:42:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:42:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:42:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:42:47 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.242927464 +0000 UTC m=+0.114781126 container create 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.148597924 +0000 UTC m=+0.020451606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:48 np0005603663 systemd[1]: Started libpod-conmon-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope.
Jan 31 03:42:48 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.403952573 +0000 UTC m=+0.275806255 container init 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.409580314 +0000 UTC m=+0.281433966 container start 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:42:48 np0005603663 systemd[1]: libpod-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope: Deactivated successfully.
Jan 31 03:42:48 np0005603663 stupefied_mendeleev[253260]: 167 167
Jan 31 03:42:48 np0005603663 conmon[253260]: conmon 7f2896a1c39654d86064 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope/container/memory.events
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.468338546 +0000 UTC m=+0.340192198 container attach 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:42:48 np0005603663 podman[253244]: 2026-01-31 08:42:48.468699336 +0000 UTC m=+0.340552988 container died 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:42:48 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b62712f56f4eee18d905342f7bea3c7b5452f7b89bc92a3e0f84c9154ea5efa8-merged.mount: Deactivated successfully.
Jan 31 03:42:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:48 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:42:49 np0005603663 podman[253244]: 2026-01-31 08:42:49.036572299 +0000 UTC m=+0.908425951 container remove 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:42:49 np0005603663 systemd[1]: libpod-conmon-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope: Deactivated successfully.
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.254517667 +0000 UTC m=+0.110596717 container create e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.166688883 +0000 UTC m=+0.022767963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:49 np0005603663 systemd[1]: Started libpod-conmon-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope.
Jan 31 03:42:49 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:49 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.417399219 +0000 UTC m=+0.273478369 container init e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.423769971 +0000 UTC m=+0.279849021 container start e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.512005396 +0000 UTC m=+0.368084476 container attach e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:42:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:49 np0005603663 wonderful_darwin[253302]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:42:49 np0005603663 wonderful_darwin[253302]: --> All data devices are unavailable
Jan 31 03:42:49 np0005603663 systemd[1]: libpod-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope: Deactivated successfully.
Jan 31 03:42:49 np0005603663 podman[253285]: 2026-01-31 08:42:49.831819299 +0000 UTC m=+0.687898359 container died e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 03:42:50 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475-merged.mount: Deactivated successfully.
Jan 31 03:42:50 np0005603663 podman[253285]: 2026-01-31 08:42:50.560946978 +0000 UTC m=+1.417026028 container remove e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:42:50 np0005603663 systemd[1]: libpod-conmon-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope: Deactivated successfully.
Jan 31 03:42:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.020540601 +0000 UTC m=+0.105312736 container create 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:50.935577029 +0000 UTC m=+0.020349204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:51 np0005603663 systemd[1]: Started libpod-conmon-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope.
Jan 31 03:42:51 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.247219449 +0000 UTC m=+0.331991594 container init 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.252882651 +0000 UTC m=+0.337654776 container start 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:42:51 np0005603663 tender_elgamal[253417]: 167 167
Jan 31 03:42:51 np0005603663 systemd[1]: libpod-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope: Deactivated successfully.
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.335798024 +0000 UTC m=+0.420570179 container attach 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.336207505 +0000 UTC m=+0.420979640 container died 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:42:51 np0005603663 systemd[1]: var-lib-containers-storage-overlay-41e98167b715a5dd6d67e4b3b7ac9ddd2cd009919764a52fa1a10507adb81af5-merged.mount: Deactivated successfully.
Jan 31 03:42:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:51 np0005603663 podman[253400]: 2026-01-31 08:42:51.905466458 +0000 UTC m=+0.990238583 container remove 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:42:51 np0005603663 systemd[1]: libpod-conmon-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope: Deactivated successfully.
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.062313567 +0000 UTC m=+0.062774397 container create 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.019844862 +0000 UTC m=+0.020305712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:52 np0005603663 systemd[1]: Started libpod-conmon-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope.
Jan 31 03:42:52 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:52 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.235455413 +0000 UTC m=+0.235916263 container init 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.242200696 +0000 UTC m=+0.242661526 container start 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.314077163 +0000 UTC m=+0.314538013 container attach 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]: {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    "0": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "devices": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "/dev/loop3"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            ],
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_name": "ceph_lv0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_size": "21470642176",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "name": "ceph_lv0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "tags": {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_name": "ceph",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.crush_device_class": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.encrypted": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.objectstore": "bluestore",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_id": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.vdo": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.with_tpm": "0"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            },
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "vg_name": "ceph_vg0"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        }
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    ],
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    "1": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "devices": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "/dev/loop4"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            ],
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_name": "ceph_lv1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_size": "21470642176",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "name": "ceph_lv1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "tags": {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_name": "ceph",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.crush_device_class": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.encrypted": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.objectstore": "bluestore",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_id": "1",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.vdo": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.with_tpm": "0"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            },
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "vg_name": "ceph_vg1"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        }
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    ],
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    "2": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "devices": [
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "/dev/loop5"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            ],
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_name": "ceph_lv2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_size": "21470642176",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "name": "ceph_lv2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "tags": {
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.cluster_name": "ceph",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.crush_device_class": "",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.encrypted": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.objectstore": "bluestore",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osd_id": "2",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.vdo": "0",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:                "ceph.with_tpm": "0"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            },
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "type": "block",
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:            "vg_name": "ceph_vg2"
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:        }
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]:    ]
Jan 31 03:42:52 np0005603663 funny_kowalevski[253460]: }
Jan 31 03:42:52 np0005603663 systemd[1]: libpod-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope: Deactivated successfully.
Jan 31 03:42:52 np0005603663 podman[253444]: 2026-01-31 08:42:52.508829847 +0000 UTC m=+0.509290667 container died 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:42:52 np0005603663 systemd[1]: var-lib-containers-storage-overlay-5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade-merged.mount: Deactivated successfully.
Jan 31 03:42:53 np0005603663 podman[253444]: 2026-01-31 08:42:53.209807799 +0000 UTC m=+1.210268629 container remove 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:42:53 np0005603663 systemd[1]: libpod-conmon-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope: Deactivated successfully.
Jan 31 03:42:53 np0005603663 podman[253543]: 2026-01-31 08:42:53.583285038 +0000 UTC m=+0.019502039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:53 np0005603663 podman[253543]: 2026-01-31 08:42:53.718612102 +0000 UTC m=+0.154829063 container create f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:42:53 np0005603663 systemd[1]: Started libpod-conmon-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope.
Jan 31 03:42:53 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:54 np0005603663 podman[253543]: 2026-01-31 08:42:54.249446094 +0000 UTC m=+0.685663085 container init f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:42:54 np0005603663 podman[253543]: 2026-01-31 08:42:54.254103367 +0000 UTC m=+0.690320338 container start f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:42:54 np0005603663 admiring_khorana[253559]: 167 167
Jan 31 03:42:54 np0005603663 systemd[1]: libpod-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope: Deactivated successfully.
Jan 31 03:42:54 np0005603663 podman[253543]: 2026-01-31 08:42:54.4970444 +0000 UTC m=+0.933261401 container attach f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:42:54 np0005603663 podman[253543]: 2026-01-31 08:42:54.498831271 +0000 UTC m=+0.935048232 container died f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:42:54 np0005603663 systemd[1]: var-lib-containers-storage-overlay-16043fdba2cde5259c8979996b724abec2ed07ba6d528d79a2e76300f51a7224-merged.mount: Deactivated successfully.
Jan 31 03:42:55 np0005603663 podman[253543]: 2026-01-31 08:42:55.08021928 +0000 UTC m=+1.516436251 container remove f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:42:55 np0005603663 systemd[1]: libpod-conmon-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope: Deactivated successfully.
Jan 31 03:42:55 np0005603663 podman[253603]: 2026-01-31 08:42:55.180621984 +0000 UTC m=+0.023433652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:42:55 np0005603663 podman[253603]: 2026-01-31 08:42:55.323324578 +0000 UTC m=+0.166136246 container create 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 03:42:55 np0005603663 podman[253574]: 2026-01-31 08:42:55.330537785 +0000 UTC m=+1.037678700 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:42:55 np0005603663 podman[253565]: 2026-01-31 08:42:55.431672479 +0000 UTC m=+1.140971606 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:42:55 np0005603663 systemd[1]: Started libpod-conmon-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope.
Jan 31 03:42:55 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:42:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:55 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:55 np0005603663 podman[253603]: 2026-01-31 08:42:55.573028815 +0000 UTC m=+0.415840503 container init 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 03:42:55 np0005603663 podman[253603]: 2026-01-31 08:42:55.580334274 +0000 UTC m=+0.423145942 container start 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:42:55 np0005603663 podman[253603]: 2026-01-31 08:42:55.648237458 +0000 UTC m=+0.491049136 container attach 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:56 np0005603663 lvm[253720]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:42:56 np0005603663 lvm[253717]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:42:56 np0005603663 lvm[253717]: VG ceph_vg0 finished
Jan 31 03:42:56 np0005603663 lvm[253720]: VG ceph_vg1 finished
Jan 31 03:42:56 np0005603663 lvm[253722]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:42:56 np0005603663 lvm[253722]: VG ceph_vg2 finished
Jan 31 03:42:56 np0005603663 beautiful_gould[253640]: {}
Jan 31 03:42:56 np0005603663 systemd[1]: libpod-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Deactivated successfully.
Jan 31 03:42:56 np0005603663 systemd[1]: libpod-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Consumed 1.103s CPU time.
Jan 31 03:42:56 np0005603663 podman[253603]: 2026-01-31 08:42:56.369630724 +0000 UTC m=+1.212442392 container died 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:42:56 np0005603663 systemd[1]: var-lib-containers-storage-overlay-36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78-merged.mount: Deactivated successfully.
Jan 31 03:42:57 np0005603663 podman[253603]: 2026-01-31 08:42:57.263007274 +0000 UTC m=+2.105818962 container remove 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:57 np0005603663 systemd[1]: libpod-conmon-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Deactivated successfully.
Jan 31 03:42:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:42:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:57 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:42:57 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:42:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:58 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:42:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.903 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.903 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:43:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:43:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:43:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:43:19 np0005603663 nova_compute[238824]: 2026-01-31 08:43:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:22 np0005603663 nova_compute[238824]: 2026-01-31 08:43:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:22 np0005603663 nova_compute[238824]: 2026-01-31 08:43:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:22 np0005603663 nova_compute[238824]: 2026-01-31 08:43:22.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:22 np0005603663 nova_compute[238824]: 2026-01-31 08:43:22.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:43:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:26 np0005603663 podman[253765]: 2026-01-31 08:43:26.185592234 +0000 UTC m=+0.049941331 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:43:26 np0005603663 podman[253764]: 2026-01-31 08:43:26.232139136 +0000 UTC m=+0.098962854 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:43:26 np0005603663 nova_compute[238824]: 2026-01-31 08:43:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:26 np0005603663 nova_compute[238824]: 2026-01-31 08:43:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:43:26 np0005603663 nova_compute[238824]: 2026-01-31 08:43:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:43:26 np0005603663 nova_compute[238824]: 2026-01-31 08:43:26.367 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:43:26 np0005603663 nova_compute[238824]: 2026-01-31 08:43:26.367 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.368 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:43:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709318317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:43:27 np0005603663 nova_compute[238824]: 2026-01-31 08:43:27.928 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.083 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.084 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.085 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.085 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.163 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.164 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.180 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:43:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317742371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.703 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.709 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.734 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.736 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:43:28 np0005603663 nova_compute[238824]: 2026-01-31 08:43:28.736 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:29 np0005603663 nova_compute[238824]: 2026-01-31 08:43:29.731 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:43:31
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images']
Jan 31 03:43:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:43:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:43:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.107221) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030107342, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 947, "num_deletes": 251, "total_data_size": 1340247, "memory_usage": 1364496, "flush_reason": "Manual Compaction"}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030133798, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1327609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24782, "largest_seqno": 25728, "table_properties": {"data_size": 1322899, "index_size": 2298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10153, "raw_average_key_size": 19, "raw_value_size": 1313514, "raw_average_value_size": 2535, "num_data_blocks": 103, "num_entries": 518, "num_filter_entries": 518, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848941, "oldest_key_time": 1769848941, "file_creation_time": 1769849030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 26663 microseconds, and 2970 cpu microseconds.
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.133890) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1327609 bytes OK
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.133915) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142506) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142550) EVENT_LOG_v1 {"time_micros": 1769849030142541, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1335719, prev total WAL file size 1335719, number of live WAL files 2.
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.143145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1296KB)], [56(8154KB)]
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030143208, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9678042, "oldest_snapshot_seqno": -1}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4735 keys, 7904189 bytes, temperature: kUnknown
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030381279, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7904189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871966, "index_size": 19313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118665, "raw_average_key_size": 25, "raw_value_size": 7785634, "raw_average_value_size": 1644, "num_data_blocks": 797, "num_entries": 4735, "num_filter_entries": 4735, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.381727) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7904189 bytes
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.385612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.6 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 5249, records dropped: 514 output_compression: NoCompression
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.385641) EVENT_LOG_v1 {"time_micros": 1769849030385628, "job": 30, "event": "compaction_finished", "compaction_time_micros": 238378, "compaction_time_cpu_micros": 13796, "output_level": 6, "num_output_files": 1, "total_output_size": 7904189, "num_input_records": 5249, "num_output_records": 4735, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030386132, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030387684, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.143055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:43:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:57 np0005603663 podman[253854]: 2026-01-31 08:43:57.171092923 +0000 UTC m=+0.068198253 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 03:43:57 np0005603663 podman[253855]: 2026-01-31 08:43:57.173034048 +0000 UTC m=+0.063938410 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:43:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:43:58 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:43:58 np0005603663 podman[254042]: 2026-01-31 08:43:58.80900213 +0000 UTC m=+0.023505843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:43:58 np0005603663 podman[254042]: 2026-01-31 08:43:58.95605385 +0000 UTC m=+0.170557573 container create b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:43:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:43:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:43:59 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:43:59 np0005603663 systemd[1]: Started libpod-conmon-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope.
Jan 31 03:43:59 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:43:59 np0005603663 podman[254042]: 2026-01-31 08:43:59.157428003 +0000 UTC m=+0.371931726 container init b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:43:59 np0005603663 podman[254042]: 2026-01-31 08:43:59.16538046 +0000 UTC m=+0.379884133 container start b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:43:59 np0005603663 gifted_chaum[254059]: 167 167
Jan 31 03:43:59 np0005603663 systemd[1]: libpod-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope: Deactivated successfully.
Jan 31 03:43:59 np0005603663 podman[254042]: 2026-01-31 08:43:59.206435895 +0000 UTC m=+0.420939628 container attach b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:43:59 np0005603663 podman[254042]: 2026-01-31 08:43:59.207023892 +0000 UTC m=+0.421527595 container died b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:43:59 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e81985b7de4e1079e9b46e13fef5a5c057a373a0048a22ef175338e84761e257-merged.mount: Deactivated successfully.
Jan 31 03:43:59 np0005603663 podman[254042]: 2026-01-31 08:43:59.715621878 +0000 UTC m=+0.930125561 container remove b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:43:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:43:59 np0005603663 systemd[1]: libpod-conmon-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope: Deactivated successfully.
Jan 31 03:43:59 np0005603663 podman[254084]: 2026-01-31 08:43:59.81666497 +0000 UTC m=+0.024266915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:43:59 np0005603663 podman[254084]: 2026-01-31 08:43:59.917077044 +0000 UTC m=+0.124678979 container create b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:43:59 np0005603663 systemd[1]: Started libpod-conmon-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope.
Jan 31 03:44:00 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:44:00 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:00 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:00 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:00 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:00 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:00 np0005603663 podman[254084]: 2026-01-31 08:44:00.120818555 +0000 UTC m=+0.328420530 container init b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:44:00 np0005603663 podman[254084]: 2026-01-31 08:44:00.127235859 +0000 UTC m=+0.334837804 container start b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:44:00 np0005603663 podman[254084]: 2026-01-31 08:44:00.205106918 +0000 UTC m=+0.412708853 container attach b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:44:00 np0005603663 jolly_keller[254101]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:44:00 np0005603663 jolly_keller[254101]: --> All data devices are unavailable
Jan 31 03:44:00 np0005603663 systemd[1]: libpod-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope: Deactivated successfully.
Jan 31 03:44:00 np0005603663 podman[254084]: 2026-01-31 08:44:00.583490167 +0000 UTC m=+0.791092092 container died b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:44:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:01 np0005603663 systemd[1]: var-lib-containers-storage-overlay-3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144-merged.mount: Deactivated successfully.
Jan 31 03:44:01 np0005603663 podman[254084]: 2026-01-31 08:44:01.697859011 +0000 UTC m=+1.905460946 container remove b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 03:44:01 np0005603663 systemd[1]: libpod-conmon-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope: Deactivated successfully.
Jan 31 03:44:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.127662743 +0000 UTC m=+0.023893585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.344664003 +0000 UTC m=+0.240894855 container create 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:44:02 np0005603663 systemd[1]: Started libpod-conmon-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope.
Jan 31 03:44:02 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.850993174 +0000 UTC m=+0.747224046 container init 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.854853934 +0000 UTC m=+0.751084756 container start 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:44:02 np0005603663 intelligent_liskov[254216]: 167 167
Jan 31 03:44:02 np0005603663 systemd[1]: libpod-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope: Deactivated successfully.
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.893867581 +0000 UTC m=+0.790098413 container attach 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.894971993 +0000 UTC m=+0.791202845 container died 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 03:44:02 np0005603663 systemd[1]: var-lib-containers-storage-overlay-34290d84ddc8ac9c5fefc5d2aaec646d5467245115818f019d89534344b3875d-merged.mount: Deactivated successfully.
Jan 31 03:44:02 np0005603663 podman[254200]: 2026-01-31 08:44:02.957369919 +0000 UTC m=+0.853600741 container remove 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:44:02 np0005603663 systemd[1]: libpod-conmon-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope: Deactivated successfully.
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.095519173 +0000 UTC m=+0.049649783 container create a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:44:03 np0005603663 systemd[1]: Started libpod-conmon-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope.
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.070128116 +0000 UTC m=+0.024258746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:44:03 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:44:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:03 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.196470102 +0000 UTC m=+0.150600732 container init a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.203168383 +0000 UTC m=+0.157298993 container start a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.209372541 +0000 UTC m=+0.163503171 container attach a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:44:03 np0005603663 elated_hugle[254256]: {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    "0": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "devices": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "/dev/loop3"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            ],
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_name": "ceph_lv0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_size": "21470642176",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "name": "ceph_lv0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "tags": {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_name": "ceph",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.crush_device_class": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.encrypted": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.objectstore": "bluestore",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_id": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.vdo": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.with_tpm": "0"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            },
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "vg_name": "ceph_vg0"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        }
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    ],
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    "1": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "devices": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "/dev/loop4"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            ],
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_name": "ceph_lv1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_size": "21470642176",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "name": "ceph_lv1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "tags": {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_name": "ceph",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.crush_device_class": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.encrypted": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.objectstore": "bluestore",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_id": "1",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.vdo": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.with_tpm": "0"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            },
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "vg_name": "ceph_vg1"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        }
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    ],
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    "2": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "devices": [
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "/dev/loop5"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            ],
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_name": "ceph_lv2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_size": "21470642176",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "name": "ceph_lv2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "tags": {
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.cluster_name": "ceph",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.crush_device_class": "",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.encrypted": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.objectstore": "bluestore",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osd_id": "2",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.vdo": "0",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:                "ceph.with_tpm": "0"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            },
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "type": "block",
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:            "vg_name": "ceph_vg2"
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:        }
Jan 31 03:44:03 np0005603663 elated_hugle[254256]:    ]
Jan 31 03:44:03 np0005603663 elated_hugle[254256]: }
Jan 31 03:44:03 np0005603663 systemd[1]: libpod-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope: Deactivated successfully.
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.494738538 +0000 UTC m=+0.448869148 container died a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:03 np0005603663 systemd[1]: var-lib-containers-storage-overlay-37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c-merged.mount: Deactivated successfully.
Jan 31 03:44:03 np0005603663 podman[254240]: 2026-01-31 08:44:03.592726963 +0000 UTC m=+0.546857573 container remove a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:44:03 np0005603663 systemd[1]: libpod-conmon-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope: Deactivated successfully.
Jan 31 03:44:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.012447246 +0000 UTC m=+0.025273075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.137937047 +0000 UTC m=+0.150762856 container create 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:44:04 np0005603663 systemd[1]: Started libpod-conmon-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope.
Jan 31 03:44:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.223306001 +0000 UTC m=+0.236131840 container init 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.229370504 +0000 UTC m=+0.242196313 container start 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:44:04 np0005603663 confident_bhaskara[254358]: 167 167
Jan 31 03:44:04 np0005603663 systemd[1]: libpod-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope: Deactivated successfully.
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.238330211 +0000 UTC m=+0.251156040 container attach 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.238716512 +0000 UTC m=+0.251542321 container died 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:04 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bda1cdda8a2b77d5e3d87a9bdb0452dfdeb94d80571d18d9e4aad52013f98ddd-merged.mount: Deactivated successfully.
Jan 31 03:44:04 np0005603663 podman[254341]: 2026-01-31 08:44:04.297853414 +0000 UTC m=+0.310679223 container remove 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:44:04 np0005603663 systemd[1]: libpod-conmon-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope: Deactivated successfully.
Jan 31 03:44:04 np0005603663 podman[254383]: 2026-01-31 08:44:04.417096267 +0000 UTC m=+0.041248222 container create b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:04 np0005603663 systemd[1]: Started libpod-conmon-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope.
Jan 31 03:44:04 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:44:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:04 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:04 np0005603663 podman[254383]: 2026-01-31 08:44:04.488682516 +0000 UTC m=+0.112834481 container init b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 03:44:04 np0005603663 podman[254383]: 2026-01-31 08:44:04.394868141 +0000 UTC m=+0.019020116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:44:04 np0005603663 podman[254383]: 2026-01-31 08:44:04.497066336 +0000 UTC m=+0.121218281 container start b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:44:04 np0005603663 podman[254383]: 2026-01-31 08:44:04.50174822 +0000 UTC m=+0.125900225 container attach b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:44:05 np0005603663 lvm[254479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:44:05 np0005603663 lvm[254480]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:44:05 np0005603663 lvm[254480]: VG ceph_vg1 finished
Jan 31 03:44:05 np0005603663 lvm[254479]: VG ceph_vg0 finished
Jan 31 03:44:05 np0005603663 lvm[254482]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:44:05 np0005603663 lvm[254482]: VG ceph_vg2 finished
Jan 31 03:44:05 np0005603663 clever_galois[254400]: {}
Jan 31 03:44:05 np0005603663 systemd[1]: libpod-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope: Deactivated successfully.
Jan 31 03:44:05 np0005603663 podman[254383]: 2026-01-31 08:44:05.178972552 +0000 UTC m=+0.803124528 container died b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:44:05 np0005603663 systemd[1]: var-lib-containers-storage-overlay-cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138-merged.mount: Deactivated successfully.
Jan 31 03:44:05 np0005603663 podman[254383]: 2026-01-31 08:44:05.234641846 +0000 UTC m=+0.858793791 container remove b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:44:05 np0005603663 systemd[1]: libpod-conmon-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope: Deactivated successfully.
Jan 31 03:44:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:44:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:44:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:44:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:44:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:44:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:44:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.902 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:44:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:44:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:44:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:44:19 np0005603663 nova_compute[238824]: 2026-01-31 08:44:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:19 np0005603663 nova_compute[238824]: 2026-01-31 08:44:19.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:44:19 np0005603663 nova_compute[238824]: 2026-01-31 08:44:19.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:44:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:20 np0005603663 nova_compute[238824]: 2026-01-31 08:44:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:21 np0005603663 nova_compute[238824]: 2026-01-31 08:44:21.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:22 np0005603663 nova_compute[238824]: 2026-01-31 08:44:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:22 np0005603663 nova_compute[238824]: 2026-01-31 08:44:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:22 np0005603663 nova_compute[238824]: 2026-01-31 08:44:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:44:23 np0005603663 nova_compute[238824]: 2026-01-31 08:44:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:25 np0005603663 nova_compute[238824]: 2026-01-31 08:44:25.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.360 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:26 np0005603663 nova_compute[238824]: 2026-01-31 08:44:26.360 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:44:27 np0005603663 nova_compute[238824]: 2026-01-31 08:44:27.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:27 np0005603663 nova_compute[238824]: 2026-01-31 08:44:27.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:28 np0005603663 podman[254525]: 2026-01-31 08:44:28.157830668 +0000 UTC m=+0.052428672 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:44:28 np0005603663 podman[254524]: 2026-01-31 08:44:28.186429806 +0000 UTC m=+0.081124222 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.368 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.368 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:44:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694860644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:44:28 np0005603663 nova_compute[238824]: 2026-01-31 08:44:28.892 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.045 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.046 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.046 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.047 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.520 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.521 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:44:29 np0005603663 nova_compute[238824]: 2026-01-31 08:44:29.538 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:44:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350916459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:44:30 np0005603663 nova_compute[238824]: 2026-01-31 08:44:30.088 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:44:30 np0005603663 nova_compute[238824]: 2026-01-31 08:44:30.093 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:44:30 np0005603663 nova_compute[238824]: 2026-01-31 08:44:30.110 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:44:30 np0005603663 nova_compute[238824]: 2026-01-31 08:44:30.111 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:44:30 np0005603663 nova_compute[238824]: 2026-01-31 08:44:30.112 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:44:30 np0005603663 systemd-logind[793]: New session 52 of user zuul.
Jan 31 03:44:30 np0005603663 systemd[1]: Started Session 52 of User zuul.
Jan 31 03:44:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:31 np0005603663 nova_compute[238824]: 2026-01-31 08:44:31.107 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:44:31
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta']
Jan 31 03:44:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:33 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:33.053 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:44:33 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:33.055 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:44:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:36 np0005603663 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 03:44:36 np0005603663 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Jan 31 03:44:36 np0005603663 systemd-logind[793]: Removed session 52.
Jan 31 03:44:37 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:44:37.057 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:44:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:44:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:49 np0005603663 nova_compute[238824]: 2026-01-31 08:44:49.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:44:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:44:59 np0005603663 podman[254871]: 2026-01-31 08:44:59.176478216 +0000 UTC m=+0.056005854 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:44:59 np0005603663 podman[254870]: 2026-01-31 08:44:59.198005142 +0000 UTC m=+0.084067977 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:44:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:45:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.453642033 +0000 UTC m=+0.033190101 container create 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:45:06 np0005603663 systemd[1]: Started libpod-conmon-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope.
Jan 31 03:45:06 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.531356257 +0000 UTC m=+0.110904345 container init 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.439397865 +0000 UTC m=+0.018945953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.53706179 +0000 UTC m=+0.116609858 container start 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.5401923 +0000 UTC m=+0.119740418 container attach 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:45:06 np0005603663 systemd[1]: libpod-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope: Deactivated successfully.
Jan 31 03:45:06 np0005603663 gifted_banach[255076]: 167 167
Jan 31 03:45:06 np0005603663 conmon[255076]: conmon 8ac3fb3ff9fd89327129 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope/container/memory.events
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.542201987 +0000 UTC m=+0.121750065 container died 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:45:06 np0005603663 systemd[1]: var-lib-containers-storage-overlay-596ba08bf2a296dda3c8ae1c5fe2aa70de7a039107a0c557efb43d1b6710989e-merged.mount: Deactivated successfully.
Jan 31 03:45:06 np0005603663 podman[255060]: 2026-01-31 08:45:06.59226929 +0000 UTC m=+0.171817348 container remove 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 03:45:06 np0005603663 systemd[1]: libpod-conmon-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope: Deactivated successfully.
Jan 31 03:45:06 np0005603663 podman[255101]: 2026-01-31 08:45:06.695971317 +0000 UTC m=+0.032969134 container create 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:45:06 np0005603663 systemd[1]: Started libpod-conmon-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope.
Jan 31 03:45:06 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:06 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:06 np0005603663 podman[255101]: 2026-01-31 08:45:06.767467074 +0000 UTC m=+0.104464891 container init 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:45:06 np0005603663 podman[255101]: 2026-01-31 08:45:06.775316938 +0000 UTC m=+0.112314755 container start 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:45:06 np0005603663 podman[255101]: 2026-01-31 08:45:06.682459041 +0000 UTC m=+0.019456878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:06 np0005603663 podman[255101]: 2026-01-31 08:45:06.803887826 +0000 UTC m=+0.140885633 container attach 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:45:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:45:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:06 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:45:07 np0005603663 interesting_yonath[255117]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:45:07 np0005603663 interesting_yonath[255117]: --> All data devices are unavailable
Jan 31 03:45:07 np0005603663 systemd[1]: libpod-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope: Deactivated successfully.
Jan 31 03:45:07 np0005603663 podman[255101]: 2026-01-31 08:45:07.236508718 +0000 UTC m=+0.573506545 container died 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:45:07 np0005603663 systemd[1]: var-lib-containers-storage-overlay-29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c-merged.mount: Deactivated successfully.
Jan 31 03:45:07 np0005603663 podman[255101]: 2026-01-31 08:45:07.276611036 +0000 UTC m=+0.613608853 container remove 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:45:07 np0005603663 systemd[1]: libpod-conmon-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope: Deactivated successfully.
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.661867952 +0000 UTC m=+0.035848767 container create 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:45:07 np0005603663 systemd[1]: Started libpod-conmon-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope.
Jan 31 03:45:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.714405476 +0000 UTC m=+0.088386311 container init 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.720175021 +0000 UTC m=+0.094155836 container start 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:45:07 np0005603663 clever_wozniak[255227]: 167 167
Jan 31 03:45:07 np0005603663 systemd[1]: libpod-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope: Deactivated successfully.
Jan 31 03:45:07 np0005603663 conmon[255227]: conmon 22cff697f383b6a84dfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope/container/memory.events
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.724848165 +0000 UTC m=+0.098828980 container attach 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.72540578 +0000 UTC m=+0.099386595 container died 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.645760131 +0000 UTC m=+0.019740966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:07 np0005603663 systemd[1]: var-lib-containers-storage-overlay-982c652eed57059d6e26e352ea273ee27873d068729defe6d458fe4ea59a289d-merged.mount: Deactivated successfully.
Jan 31 03:45:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:07 np0005603663 podman[255211]: 2026-01-31 08:45:07.75893091 +0000 UTC m=+0.132911725 container remove 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:45:07 np0005603663 systemd[1]: libpod-conmon-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope: Deactivated successfully.
Jan 31 03:45:07 np0005603663 podman[255251]: 2026-01-31 08:45:07.891746841 +0000 UTC m=+0.037101613 container create 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 31 03:45:07 np0005603663 systemd[1]: Started libpod-conmon-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope.
Jan 31 03:45:07 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:07 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:07 np0005603663 podman[255251]: 2026-01-31 08:45:07.874421455 +0000 UTC m=+0.019776257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:07 np0005603663 podman[255251]: 2026-01-31 08:45:07.974680625 +0000 UTC m=+0.120035427 container init 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:45:07 np0005603663 podman[255251]: 2026-01-31 08:45:07.980023598 +0000 UTC m=+0.125378410 container start 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:45:07 np0005603663 podman[255251]: 2026-01-31 08:45:07.98427346 +0000 UTC m=+0.129628262 container attach 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]: {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    "0": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "devices": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "/dev/loop3"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            ],
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_name": "ceph_lv0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_size": "21470642176",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "name": "ceph_lv0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "tags": {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_name": "ceph",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.crush_device_class": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.encrypted": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.objectstore": "bluestore",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_id": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.vdo": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.with_tpm": "0"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            },
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "vg_name": "ceph_vg0"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        }
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    ],
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    "1": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "devices": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "/dev/loop4"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            ],
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_name": "ceph_lv1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_size": "21470642176",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "name": "ceph_lv1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "tags": {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_name": "ceph",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.crush_device_class": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.encrypted": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.objectstore": "bluestore",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_id": "1",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.vdo": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.with_tpm": "0"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            },
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "vg_name": "ceph_vg1"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        }
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    ],
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    "2": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "devices": [
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "/dev/loop5"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            ],
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_name": "ceph_lv2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_size": "21470642176",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "name": "ceph_lv2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "tags": {
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.cluster_name": "ceph",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.crush_device_class": "",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.encrypted": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.objectstore": "bluestore",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osd_id": "2",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.vdo": "0",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:                "ceph.with_tpm": "0"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            },
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "type": "block",
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:            "vg_name": "ceph_vg2"
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:        }
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]:    ]
Jan 31 03:45:08 np0005603663 hungry_rhodes[255267]: }
Jan 31 03:45:08 np0005603663 systemd[1]: libpod-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope: Deactivated successfully.
Jan 31 03:45:08 np0005603663 podman[255251]: 2026-01-31 08:45:08.298339328 +0000 UTC m=+0.443694140 container died 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:45:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627-merged.mount: Deactivated successfully.
Jan 31 03:45:08 np0005603663 podman[255251]: 2026-01-31 08:45:08.345325343 +0000 UTC m=+0.490680135 container remove 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:45:08 np0005603663 systemd[1]: libpod-conmon-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope: Deactivated successfully.
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.751716444 +0000 UTC m=+0.038052410 container create 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:45:08 np0005603663 systemd[1]: Started libpod-conmon-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope.
Jan 31 03:45:08 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.806649676 +0000 UTC m=+0.092985702 container init 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.813910104 +0000 UTC m=+0.100246070 container start 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:45:08 np0005603663 stupefied_faraday[255367]: 167 167
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.817558069 +0000 UTC m=+0.103894035 container attach 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:45:08 np0005603663 systemd[1]: libpod-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope: Deactivated successfully.
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.819293428 +0000 UTC m=+0.105629404 container died 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.735383877 +0000 UTC m=+0.021719853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:08 np0005603663 systemd[1]: var-lib-containers-storage-overlay-49f9493aae9a2ef63689c0cc3777a3950ff17cba1188eead46a37f13e9a68b7f-merged.mount: Deactivated successfully.
Jan 31 03:45:08 np0005603663 podman[255350]: 2026-01-31 08:45:08.861716563 +0000 UTC m=+0.148052559 container remove 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:45:08 np0005603663 systemd[1]: libpod-conmon-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope: Deactivated successfully.
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.025128969 +0000 UTC m=+0.036857295 container create b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:45:09 np0005603663 systemd[1]: Started libpod-conmon-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope.
Jan 31 03:45:09 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:45:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:09 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.009598245 +0000 UTC m=+0.021326601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.110124652 +0000 UTC m=+0.121852988 container init b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.114503047 +0000 UTC m=+0.126231373 container start b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.117521054 +0000 UTC m=+0.129249410 container attach b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 03:45:09 np0005603663 lvm[255486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:45:09 np0005603663 lvm[255486]: VG ceph_vg0 finished
Jan 31 03:45:09 np0005603663 lvm[255487]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:45:09 np0005603663 lvm[255487]: VG ceph_vg1 finished
Jan 31 03:45:09 np0005603663 lvm[255489]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:45:09 np0005603663 lvm[255489]: VG ceph_vg2 finished
Jan 31 03:45:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:09 np0005603663 happy_engelbart[255408]: {}
Jan 31 03:45:09 np0005603663 systemd[1]: libpod-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Deactivated successfully.
Jan 31 03:45:09 np0005603663 systemd[1]: libpod-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Consumed 1.051s CPU time.
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.830757297 +0000 UTC m=+0.842485663 container died b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:45:09 np0005603663 systemd[1]: var-lib-containers-storage-overlay-731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32-merged.mount: Deactivated successfully.
Jan 31 03:45:09 np0005603663 podman[255391]: 2026-01-31 08:45:09.954067556 +0000 UTC m=+0.965795882 container remove b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:45:09 np0005603663 systemd[1]: libpod-conmon-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Deactivated successfully.
Jan 31 03:45:09 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:45:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.904 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.906 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:45:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:45:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:45:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:45:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:21 np0005603663 nova_compute[238824]: 2026-01-31 08:45:21.366 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:22 np0005603663 nova_compute[238824]: 2026-01-31 08:45:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:22 np0005603663 nova_compute[238824]: 2026-01-31 08:45:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:45:23 np0005603663 nova_compute[238824]: 2026-01-31 08:45:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:23 np0005603663 nova_compute[238824]: 2026-01-31 08:45:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:28 np0005603663 nova_compute[238824]: 2026-01-31 08:45:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:28 np0005603663 nova_compute[238824]: 2026-01-31 08:45:28.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:45:28 np0005603663 nova_compute[238824]: 2026-01-31 08:45:28.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:45:28 np0005603663 nova_compute[238824]: 2026-01-31 08:45:28.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:45:28 np0005603663 nova_compute[238824]: 2026-01-31 08:45:28.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:29 np0005603663 nova_compute[238824]: 2026-01-31 08:45:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:30 np0005603663 podman[255531]: 2026-01-31 08:45:30.1966718 +0000 UTC m=+0.080516505 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 03:45:30 np0005603663 podman[255530]: 2026-01-31 08:45:30.202151657 +0000 UTC m=+0.086442195 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:45:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821916903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:45:30 np0005603663 nova_compute[238824]: 2026-01-31 08:45:30.925 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.071 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5093MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.220 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.221 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.282 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.338 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.338 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.357 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.379 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.396 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:45:31
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Jan 31 03:45:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:45:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:45:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646191734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.987 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:31 np0005603663 nova_compute[238824]: 2026-01-31 08:45:31.992 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:45:32 np0005603663 nova_compute[238824]: 2026-01-31 08:45:32.032 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:45:32 np0005603663 nova_compute[238824]: 2026-01-31 08:45:32.034 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:45:32 np0005603663 nova_compute[238824]: 2026-01-31 08:45:32.034 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:33 np0005603663 nova_compute[238824]: 2026-01-31 08:45:33.029 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:45:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:42 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:42.010 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:42 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:42.012 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:45:42 np0005603663 systemd-logind[793]: New session 53 of user zuul.
Jan 31 03:45:42 np0005603663 systemd[1]: Started Session 53 of User zuul.
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:45:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:46 np0005603663 systemd-logind[793]: New session 54 of user zuul.
Jan 31 03:45:46 np0005603663 systemd[1]: Started Session 54 of User zuul.
Jan 31 03:45:47 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:45:47.013 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:47 np0005603663 systemd[1]: Reloading.
Jan 31 03:45:47 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:45:47 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:45:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:48 np0005603663 systemd[1]: Reloading.
Jan 31 03:45:48 np0005603663 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 03:45:48 np0005603663 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 03:45:48 np0005603663 systemd[1]: Starting Podman API Socket...
Jan 31 03:45:48 np0005603663 systemd[1]: Listening on Podman API Socket.
Jan 31 03:45:48 np0005603663 dbus-broker-launch[778]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 31 03:45:48 np0005603663 systemd[1]: podman.socket: Deactivated successfully.
Jan 31 03:45:48 np0005603663 systemd[1]: Closed Podman API Socket.
Jan 31 03:45:48 np0005603663 systemd[1]: Stopping Podman API Socket...
Jan 31 03:45:48 np0005603663 systemd[1]: Starting Podman API Socket...
Jan 31 03:45:48 np0005603663 systemd[1]: Listening on Podman API Socket.
Jan 31 03:45:48 np0005603663 systemd-logind[793]: New session 55 of user zuul.
Jan 31 03:45:48 np0005603663 systemd[1]: Started Session 55 of User zuul.
Jan 31 03:45:48 np0005603663 systemd[1]: Starting Podman API Service...
Jan 31 03:45:48 np0005603663 systemd[1]: Started Podman API Service.
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Setting parallel job count to 25"
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Using sqlite as database backend"
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 31 03:45:48 np0005603663 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 31 03:45:48 np0005603663 podman[256021]: @ - - [31/Jan/2026:08:45:48 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 03:45:48 np0005603663 podman[256021]: @ - - [31/Jan/2026:08:45:48 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22535 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 03:45:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:45:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:01 np0005603663 podman[256032]: 2026-01-31 08:46:01.158960097 +0000 UTC m=+0.046205323 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:46:01 np0005603663 podman[256031]: 2026-01-31 08:46:01.178336692 +0000 UTC m=+0.066383871 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:46:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:03 np0005603663 podman[256021]: time="2026-01-31T08:46:03Z" level=info msg="Received shutdown.Stop(), terminating!" PID=256021
Jan 31 03:46:03 np0005603663 systemd[1]: podman.service: Deactivated successfully.
Jan 31 03:46:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:46:10 np0005603663 podman[256215]: 2026-01-31 08:46:10.914067595 +0000 UTC m=+0.030541865 container create 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:46:10 np0005603663 systemd[1]: Started libpod-conmon-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope.
Jan 31 03:46:10 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:46:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:10 np0005603663 podman[256215]: 2026-01-31 08:46:10.988490895 +0000 UTC m=+0.104965185 container init 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:46:10 np0005603663 podman[256215]: 2026-01-31 08:46:10.995133735 +0000 UTC m=+0.111608005 container start 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:10 np0005603663 podman[256215]: 2026-01-31 08:46:10.901397673 +0000 UTC m=+0.017871963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:10 np0005603663 sleepy_jang[256232]: 167 167
Jan 31 03:46:10 np0005603663 systemd[1]: libpod-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 podman[256215]: 2026-01-31 08:46:10.999710016 +0000 UTC m=+0.116184326 container attach 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:46:11 np0005603663 podman[256215]: 2026-01-31 08:46:11.000078187 +0000 UTC m=+0.116552487 container died 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:46:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2bc3d128a2e40fa9de6c7add9dfeacb99564f59a73651dbd16ec012a274392cf-merged.mount: Deactivated successfully.
Jan 31 03:46:11 np0005603663 podman[256215]: 2026-01-31 08:46:11.039849115 +0000 UTC m=+0.156323395 container remove 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:11 np0005603663 systemd[1]: libpod-conmon-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.171376349 +0000 UTC m=+0.046621556 container create a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:46:11 np0005603663 systemd[1]: Started libpod-conmon-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope.
Jan 31 03:46:11 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:11 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.153410214 +0000 UTC m=+0.028655411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.26855775 +0000 UTC m=+0.143803017 container init a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.27763641 +0000 UTC m=+0.152881617 container start a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.283386784 +0000 UTC m=+0.158631981 container attach a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:46:11 np0005603663 affectionate_mendeleev[256272]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:46:11 np0005603663 affectionate_mendeleev[256272]: --> All data devices are unavailable
Jan 31 03:46:11 np0005603663 systemd[1]: libpod-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.72174121 +0000 UTC m=+0.596986397 container died a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 31 03:46:11 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a-merged.mount: Deactivated successfully.
Jan 31 03:46:11 np0005603663 podman[256256]: 2026-01-31 08:46:11.765460952 +0000 UTC m=+0.640706109 container remove a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:11 np0005603663 systemd[1]: libpod-conmon-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:11 np0005603663 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Jan 31 03:46:11 np0005603663 systemd-logind[793]: Removed session 53.
Jan 31 03:46:11 np0005603663 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 03:46:11 np0005603663 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Jan 31 03:46:11 np0005603663 systemd-logind[793]: Removed session 54.
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.126824424 +0000 UTC m=+0.032389368 container create d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:12 np0005603663 systemd[1]: Started libpod-conmon-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope.
Jan 31 03:46:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.201574364 +0000 UTC m=+0.107139318 container init d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.206236727 +0000 UTC m=+0.111801671 container start d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.112684529 +0000 UTC m=+0.018249493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.209363626 +0000 UTC m=+0.114928570 container attach d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:46:12 np0005603663 hopeful_nash[256432]: 167 167
Jan 31 03:46:12 np0005603663 systemd[1]: libpod-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope: Deactivated successfully.
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.212469405 +0000 UTC m=+0.118034379 container died d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e6287dde528a9927502ae0ca96830833795336ff2cf9c96a3a2200218e0338c3-merged.mount: Deactivated successfully.
Jan 31 03:46:12 np0005603663 podman[256416]: 2026-01-31 08:46:12.245531972 +0000 UTC m=+0.151096956 container remove d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:46:12 np0005603663 systemd[1]: libpod-conmon-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope: Deactivated successfully.
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.385698553 +0000 UTC m=+0.041883059 container create 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:46:12 np0005603663 systemd[1]: Started libpod-conmon-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope.
Jan 31 03:46:12 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:12 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.366517404 +0000 UTC m=+0.022701880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.47187702 +0000 UTC m=+0.128061506 container init 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.480722783 +0000 UTC m=+0.136907239 container start 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.48375774 +0000 UTC m=+0.139942206 container attach 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]: {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    "0": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "devices": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "/dev/loop3"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            ],
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_name": "ceph_lv0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_size": "21470642176",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "name": "ceph_lv0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "tags": {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_name": "ceph",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.crush_device_class": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.encrypted": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.objectstore": "bluestore",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_id": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.vdo": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.with_tpm": "0"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            },
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "vg_name": "ceph_vg0"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        }
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    ],
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    "1": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "devices": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "/dev/loop4"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            ],
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_name": "ceph_lv1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_size": "21470642176",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "name": "ceph_lv1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "tags": {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_name": "ceph",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.crush_device_class": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.encrypted": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.objectstore": "bluestore",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_id": "1",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.vdo": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.with_tpm": "0"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            },
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "vg_name": "ceph_vg1"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        }
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    ],
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    "2": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "devices": [
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "/dev/loop5"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            ],
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_name": "ceph_lv2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_size": "21470642176",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "name": "ceph_lv2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "tags": {
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.cluster_name": "ceph",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.crush_device_class": "",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.encrypted": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.objectstore": "bluestore",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osd_id": "2",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.vdo": "0",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:                "ceph.with_tpm": "0"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            },
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "type": "block",
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:            "vg_name": "ceph_vg2"
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:        }
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]:    ]
Jan 31 03:46:12 np0005603663 vibrant_carson[256474]: }
Jan 31 03:46:12 np0005603663 systemd[1]: libpod-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope: Deactivated successfully.
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.722469852 +0000 UTC m=+0.378654308 container died 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 03:46:12 np0005603663 systemd[1]: var-lib-containers-storage-overlay-2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334-merged.mount: Deactivated successfully.
Jan 31 03:46:12 np0005603663 podman[256457]: 2026-01-31 08:46:12.760005396 +0000 UTC m=+0.416189852 container remove 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:46:12 np0005603663 systemd[1]: libpod-conmon-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope: Deactivated successfully.
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.195643555 +0000 UTC m=+0.064910119 container create 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.151213843 +0000 UTC m=+0.020480417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:13 np0005603663 systemd[1]: Started libpod-conmon-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope.
Jan 31 03:46:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:46:13 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5958 writes, 26K keys, 5958 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5958 writes, 5958 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1331 writes, 6035 keys, 1331 commit groups, 1.0 writes per commit group, ingest: 8.83 MB, 0.01 MB/s#012Interval WAL: 1331 writes, 1331 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.5      1.77              0.07        15    0.118       0      0       0.0       0.0#012  L6      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     41.3     33.9      3.13              0.27        14    0.223     65K   7757       0.0       0.0#012 Sum      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     26.4     28.0      4.89              0.34        29    0.169     65K   7757       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     48.6     48.3      0.84              0.09         8    0.105     21K   2566       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     41.3     33.9      3.13              0.27        14    0.223     65K   7757       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.5      1.76              0.07        14    0.126       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 4.9 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 304.00 MB usage: 14.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000154 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(873,13.69 MB,4.5045%) FilterBlock(30,184.98 KB,0.0594239%) IndexBlock(30,341.11 KB,0.109577%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:46:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.318080439 +0000 UTC m=+0.187347013 container init 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.326370456 +0000 UTC m=+0.195637050 container start 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 03:46:13 np0005603663 eloquent_bouman[256572]: 167 167
Jan 31 03:46:13 np0005603663 systemd[1]: libpod-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope: Deactivated successfully.
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.339542333 +0000 UTC m=+0.208808907 container attach 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.339870122 +0000 UTC m=+0.209136676 container died 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:46:13 np0005603663 systemd[1]: var-lib-containers-storage-overlay-6decfe61cde12582084cfb6e143a8ebb0713e9af75fd1d5c9937d037e6bd31d5-merged.mount: Deactivated successfully.
Jan 31 03:46:13 np0005603663 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 03:46:13 np0005603663 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Jan 31 03:46:13 np0005603663 systemd-logind[793]: Removed session 55.
Jan 31 03:46:13 np0005603663 podman[256556]: 2026-01-31 08:46:13.375405949 +0000 UTC m=+0.244672523 container remove 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:46:13 np0005603663 systemd[1]: libpod-conmon-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope: Deactivated successfully.
Jan 31 03:46:13 np0005603663 podman[256596]: 2026-01-31 08:46:13.517511516 +0000 UTC m=+0.041093187 container create 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 03:46:13 np0005603663 systemd[1]: Started libpod-conmon-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope.
Jan 31 03:46:13 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:46:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:13 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:13 np0005603663 podman[256596]: 2026-01-31 08:46:13.496437553 +0000 UTC m=+0.020019274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:46:13 np0005603663 podman[256596]: 2026-01-31 08:46:13.595729535 +0000 UTC m=+0.119311226 container init 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:46:13 np0005603663 podman[256596]: 2026-01-31 08:46:13.602818638 +0000 UTC m=+0.126400309 container start 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:46:13 np0005603663 podman[256596]: 2026-01-31 08:46:13.605773363 +0000 UTC m=+0.129355034 container attach 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:46:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:14 np0005603663 lvm[256690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:46:14 np0005603663 lvm[256691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:46:14 np0005603663 lvm[256690]: VG ceph_vg0 finished
Jan 31 03:46:14 np0005603663 lvm[256691]: VG ceph_vg1 finished
Jan 31 03:46:14 np0005603663 lvm[256693]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:46:14 np0005603663 lvm[256693]: VG ceph_vg2 finished
Jan 31 03:46:14 np0005603663 blissful_blackwell[256612]: {}
Jan 31 03:46:14 np0005603663 systemd[1]: libpod-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Deactivated successfully.
Jan 31 03:46:14 np0005603663 systemd[1]: libpod-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Consumed 1.024s CPU time.
Jan 31 03:46:14 np0005603663 podman[256596]: 2026-01-31 08:46:14.304979234 +0000 UTC m=+0.828560945 container died 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 03:46:14 np0005603663 systemd[1]: var-lib-containers-storage-overlay-df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1-merged.mount: Deactivated successfully.
Jan 31 03:46:14 np0005603663 podman[256596]: 2026-01-31 08:46:14.349683474 +0000 UTC m=+0.873265145 container remove 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:46:14 np0005603663 systemd[1]: libpod-conmon-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Deactivated successfully.
Jan 31 03:46:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:46:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:14 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:46:14 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:46:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.906 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:46:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:46:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:46:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:46:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:21 np0005603663 nova_compute[238824]: 2026-01-31 08:46:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:23 np0005603663 nova_compute[238824]: 2026-01-31 08:46:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:23 np0005603663 nova_compute[238824]: 2026-01-31 08:46:23.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:46:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:25 np0005603663 nova_compute[238824]: 2026-01-31 08:46:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:25 np0005603663 nova_compute[238824]: 2026-01-31 08:46:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:28 np0005603663 nova_compute[238824]: 2026-01-31 08:46:28.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:29 np0005603663 nova_compute[238824]: 2026-01-31 08:46:29.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:29 np0005603663 nova_compute[238824]: 2026-01-31 08:46:29.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:46:29 np0005603663 nova_compute[238824]: 2026-01-31 08:46:29.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:46:29 np0005603663 nova_compute[238824]: 2026-01-31 08:46:29.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 03:46:29 np0005603663 nova_compute[238824]: 2026-01-31 08:46:29.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:30 np0005603663 nova_compute[238824]: 2026-01-31 08:46:30.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:46:31
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta']
Jan 31 03:46:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:46:32 np0005603663 podman[256736]: 2026-01-31 08:46:32.17560632 +0000 UTC m=+0.056642690 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:46:32 np0005603663 podman[256735]: 2026-01-31 08:46:32.206325024 +0000 UTC m=+0.091269926 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:46:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762862891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:46:32 np0005603663 nova_compute[238824]: 2026-01-31 08:46:32.876 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.047 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.048 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.049 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.049 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.119 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.119 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.136 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:46:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:46:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601642418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.675 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.679 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.695 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.697 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:46:33 np0005603663 nova_compute[238824]: 2026-01-31 08:46:33.697 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:46:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:34 np0005603663 nova_compute[238824]: 2026-01-31 08:46:34.693 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:40 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:46:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:45 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:50 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:46:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:00 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:03 np0005603663 podman[256826]: 2026-01-31 08:47:03.174961903 +0000 UTC m=+0.068306196 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:47:03 np0005603663 podman[256825]: 2026-01-31 08:47:03.204135472 +0000 UTC m=+0.098681049 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:47:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:05 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:10 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.423009106 +0000 UTC m=+0.044988145 container create 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:47:15 np0005603663 systemd[1]: Started libpod-conmon-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope.
Jan 31 03:47:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.492963437 +0000 UTC m=+0.114942496 container init 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.398767579 +0000 UTC m=+0.020746638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.499508605 +0000 UTC m=+0.121487634 container start 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.502702637 +0000 UTC m=+0.124681676 container attach 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:47:15 np0005603663 angry_goldberg[257029]: 167 167
Jan 31 03:47:15 np0005603663 systemd[1]: libpod-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope: Deactivated successfully.
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.505491607 +0000 UTC m=+0.127470646 container died 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:47:15 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9daa876ec012f9cdf64f3bdc54074bc32e4b21d8dabdf80bdca2e2f7e23b2b3a-merged.mount: Deactivated successfully.
Jan 31 03:47:15 np0005603663 podman[257013]: 2026-01-31 08:47:15.545889189 +0000 UTC m=+0.167868228 container remove 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:47:15 np0005603663 systemd[1]: libpod-conmon-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope: Deactivated successfully.
Jan 31 03:47:15 np0005603663 podman[257054]: 2026-01-31 08:47:15.674286502 +0000 UTC m=+0.035732359 container create de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:47:15 np0005603663 systemd[1]: Started libpod-conmon-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope.
Jan 31 03:47:15 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:15 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:15 np0005603663 podman[257054]: 2026-01-31 08:47:15.743214805 +0000 UTC m=+0.104660662 container init de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:47:15 np0005603663 podman[257054]: 2026-01-31 08:47:15.750081932 +0000 UTC m=+0.111527789 container start de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 03:47:15 np0005603663 podman[257054]: 2026-01-31 08:47:15.753095509 +0000 UTC m=+0.114541466 container attach de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:47:15 np0005603663 podman[257054]: 2026-01-31 08:47:15.656772058 +0000 UTC m=+0.018217935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:15 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:47:16 np0005603663 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6532 writes, 26K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6532 writes, 1295 syncs, 5.04 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 346 writes, 673 keys, 346 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 346 writes, 170 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:47:16 np0005603663 stoic_yalow[257071]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:47:16 np0005603663 stoic_yalow[257071]: --> All data devices are unavailable
Jan 31 03:47:16 np0005603663 systemd[1]: libpod-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257054]: 2026-01-31 08:47:16.15227979 +0000 UTC m=+0.513725647 container died de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:47:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1-merged.mount: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257054]: 2026-01-31 08:47:16.191789097 +0000 UTC m=+0.553234954 container remove de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:47:16 np0005603663 systemd[1]: libpod-conmon-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.59207924 +0000 UTC m=+0.035198424 container create 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:47:16 np0005603663 systemd[1]: Started libpod-conmon-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope.
Jan 31 03:47:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.660060365 +0000 UTC m=+0.103179569 container init 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.664220395 +0000 UTC m=+0.107339579 container start 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.666899622 +0000 UTC m=+0.110018806 container attach 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:47:16 np0005603663 nostalgic_cohen[257183]: 167 167
Jan 31 03:47:16 np0005603663 systemd[1]: libpod-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.668801336 +0000 UTC m=+0.111920530 container died 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.576155502 +0000 UTC m=+0.019274716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:16 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7b65ceaa0dfb2331a71630f355eac743b4e2925d1c9138511179c6bdbcc3806c-merged.mount: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257167]: 2026-01-31 08:47:16.710385302 +0000 UTC m=+0.153504486 container remove 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:47:16 np0005603663 systemd[1]: libpod-conmon-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope: Deactivated successfully.
Jan 31 03:47:16 np0005603663 podman[257209]: 2026-01-31 08:47:16.834685357 +0000 UTC m=+0.033184635 container create c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:47:16 np0005603663 systemd[1]: Started libpod-conmon-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope.
Jan 31 03:47:16 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:16 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:16 np0005603663 podman[257209]: 2026-01-31 08:47:16.82017885 +0000 UTC m=+0.018678158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:16 np0005603663 podman[257209]: 2026-01-31 08:47:16.93455389 +0000 UTC m=+0.133053208 container init c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:47:16 np0005603663 podman[257209]: 2026-01-31 08:47:16.940436339 +0000 UTC m=+0.138935617 container start c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:47:16 np0005603663 podman[257209]: 2026-01-31 08:47:16.946800012 +0000 UTC m=+0.145299300 container attach c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]: {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    "0": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "devices": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "/dev/loop3"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            ],
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_name": "ceph_lv0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_size": "21470642176",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "name": "ceph_lv0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "tags": {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_name": "ceph",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.crush_device_class": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.encrypted": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.objectstore": "bluestore",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_id": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.vdo": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.with_tpm": "0"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            },
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "vg_name": "ceph_vg0"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        }
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    ],
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    "1": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "devices": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "/dev/loop4"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            ],
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_name": "ceph_lv1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_size": "21470642176",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "name": "ceph_lv1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "tags": {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_name": "ceph",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.crush_device_class": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.encrypted": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.objectstore": "bluestore",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_id": "1",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.vdo": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.with_tpm": "0"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            },
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "vg_name": "ceph_vg1"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        }
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    ],
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    "2": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "devices": [
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "/dev/loop5"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            ],
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_name": "ceph_lv2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_size": "21470642176",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "name": "ceph_lv2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "tags": {
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.cluster_name": "ceph",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.crush_device_class": "",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.encrypted": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.objectstore": "bluestore",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osd_id": "2",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.vdo": "0",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:                "ceph.with_tpm": "0"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            },
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "type": "block",
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:            "vg_name": "ceph_vg2"
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:        }
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]:    ]
Jan 31 03:47:17 np0005603663 hopeful_tesla[257226]: }
Jan 31 03:47:17 np0005603663 systemd[1]: libpod-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope: Deactivated successfully.
Jan 31 03:47:17 np0005603663 podman[257209]: 2026-01-31 08:47:17.218174907 +0000 UTC m=+0.416674185 container died c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:47:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d-merged.mount: Deactivated successfully.
Jan 31 03:47:17 np0005603663 podman[257209]: 2026-01-31 08:47:17.258308432 +0000 UTC m=+0.456807710 container remove c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:47:17 np0005603663 systemd[1]: libpod-conmon-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope: Deactivated successfully.
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.666890523 +0000 UTC m=+0.040901937 container create 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:47:17 np0005603663 systemd[1]: Started libpod-conmon-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope.
Jan 31 03:47:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.735841297 +0000 UTC m=+0.109852741 container init 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.741099878 +0000 UTC m=+0.115111302 container start 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.648321479 +0000 UTC m=+0.022332923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:17 np0005603663 tender_edison[257324]: 167 167
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.744239938 +0000 UTC m=+0.118251372 container attach 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:47:17 np0005603663 systemd[1]: libpod-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope: Deactivated successfully.
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.74502145 +0000 UTC m=+0.119032874 container died 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:47:17 np0005603663 systemd[1]: var-lib-containers-storage-overlay-8dbdab9b7e5a713f8af88d5a15a5dd10a10ba57949f0b464c36459bc07614d4d-merged.mount: Deactivated successfully.
Jan 31 03:47:17 np0005603663 podman[257307]: 2026-01-31 08:47:17.78222114 +0000 UTC m=+0.156232564 container remove 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:47:17 np0005603663 systemd[1]: libpod-conmon-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope: Deactivated successfully.
Jan 31 03:47:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:17 np0005603663 podman[257350]: 2026-01-31 08:47:17.899972367 +0000 UTC m=+0.033102943 container create 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:47:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.907 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:17 np0005603663 systemd[1]: Started libpod-conmon-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope.
Jan 31 03:47:17 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:47:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:17 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:17 np0005603663 podman[257350]: 2026-01-31 08:47:17.968748885 +0000 UTC m=+0.101879491 container init 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:47:17 np0005603663 podman[257350]: 2026-01-31 08:47:17.974229843 +0000 UTC m=+0.107360389 container start 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 03:47:17 np0005603663 podman[257350]: 2026-01-31 08:47:17.978024302 +0000 UTC m=+0.111154898 container attach 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:47:17 np0005603663 podman[257350]: 2026-01-31 08:47:17.885427599 +0000 UTC m=+0.018558165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:47:18 np0005603663 lvm[257444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:47:18 np0005603663 lvm[257445]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:47:18 np0005603663 lvm[257444]: VG ceph_vg0 finished
Jan 31 03:47:18 np0005603663 lvm[257445]: VG ceph_vg1 finished
Jan 31 03:47:18 np0005603663 lvm[257447]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:47:18 np0005603663 lvm[257447]: VG ceph_vg2 finished
Jan 31 03:47:18 np0005603663 ecstatic_haslett[257366]: {}
Jan 31 03:47:18 np0005603663 systemd[1]: libpod-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Deactivated successfully.
Jan 31 03:47:18 np0005603663 systemd[1]: libpod-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Consumed 1.021s CPU time.
Jan 31 03:47:18 np0005603663 podman[257350]: 2026-01-31 08:47:18.792298602 +0000 UTC m=+0.925429138 container died 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:47:18 np0005603663 systemd[1]: var-lib-containers-storage-overlay-08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223-merged.mount: Deactivated successfully.
Jan 31 03:47:18 np0005603663 podman[257350]: 2026-01-31 08:47:18.841964581 +0000 UTC m=+0.975095117 container remove 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:47:18 np0005603663 systemd[1]: libpod-conmon-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Deactivated successfully.
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:47:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:19 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:47:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:47:21 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 7939 writes, 31K keys, 7939 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7939 writes, 1746 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 313 writes, 601 keys, 313 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s#012Interval WAL: 313 writes, 149 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:47:21 np0005603663 nova_compute[238824]: 2026-01-31 08:47:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:25 np0005603663 nova_compute[238824]: 2026-01-31 08:47:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:25 np0005603663 nova_compute[238824]: 2026-01-31 08:47:25.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:47:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:26 np0005603663 nova_compute[238824]: 2026-01-31 08:47:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:27 np0005603663 nova_compute[238824]: 2026-01-31 08:47:27.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:47:27 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.8 total, 600.0 interval#012Cumulative writes: 6379 writes, 26K keys, 6379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6379 writes, 1179 syncs, 5.41 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 245 writes, 416 keys, 245 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s#012Interval WAL: 245 writes, 117 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:47:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:29 np0005603663 nova_compute[238824]: 2026-01-31 08:47:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:30 np0005603663 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:30 np0005603663 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:47:30 np0005603663 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:47:30 np0005603663 nova_compute[238824]: 2026-01-31 08:47:30.356 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:47:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:47:31
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Jan 31 03:47:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 03:47:32 np0005603663 nova_compute[238824]: 2026-01-31 08:47:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:47:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434740814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.868 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.994 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.995 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5072MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.996 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:33 np0005603663 nova_compute[238824]: 2026-01-31 08:47:33.996 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.071 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:34 np0005603663 podman[257513]: 2026-01-31 08:47:34.156963203 +0000 UTC m=+0.050053160 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:47:34 np0005603663 podman[257512]: 2026-01-31 08:47:34.179775729 +0000 UTC m=+0.073288199 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 03:47:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:47:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144545224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.602 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.608 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.754 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.756 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:47:34 np0005603663 nova_compute[238824]: 2026-01-31 08:47:34.756 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:35 np0005603663 nova_compute[238824]: 2026-01-31 08:47:35.751 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.923701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257923749, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2050, "num_deletes": 251, "total_data_size": 3454811, "memory_usage": 3515760, "flush_reason": "Manual Compaction"}
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257952762, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3388061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25729, "largest_seqno": 27778, "table_properties": {"data_size": 3378673, "index_size": 5946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18646, "raw_average_key_size": 20, "raw_value_size": 3360081, "raw_average_value_size": 3616, "num_data_blocks": 264, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849031, "oldest_key_time": 1769849031, "file_creation_time": 1769849257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 29157 microseconds, and 8872 cpu microseconds.
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.952854) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3388061 bytes OK
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.952888) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956185) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956218) EVENT_LOG_v1 {"time_micros": 1769849257956207, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956283) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3446234, prev total WAL file size 3446234, number of live WAL files 2.
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.957889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3308KB)], [59(7718KB)]
Jan 31 03:47:37 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257957994, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11292250, "oldest_snapshot_seqno": -1}
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5150 keys, 9454183 bytes, temperature: kUnknown
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258058753, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9454183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417686, "index_size": 22499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127884, "raw_average_key_size": 24, "raw_value_size": 9322525, "raw_average_value_size": 1810, "num_data_blocks": 930, "num_entries": 5150, "num_filter_entries": 5150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.058976) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9454183 bytes
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.060622) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.0 rd, 93.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5664, records dropped: 514 output_compression: NoCompression
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.060640) EVENT_LOG_v1 {"time_micros": 1769849258060631, "job": 32, "event": "compaction_finished", "compaction_time_micros": 100827, "compaction_time_cpu_micros": 27815, "output_level": 6, "num_output_files": 1, "total_output_size": 9454183, "num_input_records": 5664, "num_output_records": 5150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258061328, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258062455, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.957708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:38 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:47:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:47:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:05 np0005603663 podman[257578]: 2026-01-31 08:48:05.167987553 +0000 UTC m=+0.051130572 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 03:48:05 np0005603663 podman[257577]: 2026-01-31 08:48:05.194541797 +0000 UTC m=+0.077304205 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:48:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:48:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:48:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:48:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:48:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:48:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:19 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:48:19 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:20 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.442219895 +0000 UTC m=+0.025874795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.5765977 +0000 UTC m=+0.160252540 container create 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:48:20 np0005603663 systemd[1]: Started libpod-conmon-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope.
Jan 31 03:48:20 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.738885498 +0000 UTC m=+0.322540388 container init 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.74802502 +0000 UTC m=+0.331679810 container start 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:48:20 np0005603663 gifted_lehmann[257850]: 167 167
Jan 31 03:48:20 np0005603663 systemd[1]: libpod-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope: Deactivated successfully.
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.763118165 +0000 UTC m=+0.346772975 container attach 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:48:20 np0005603663 podman[257834]: 2026-01-31 08:48:20.763973429 +0000 UTC m=+0.347628239 container died 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:20 np0005603663 systemd[1]: var-lib-containers-storage-overlay-584831878db90c34905d5c3aa0c914c8ff443951e49d802e6b04ce7db08a4668-merged.mount: Deactivated successfully.
Jan 31 03:48:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:21 np0005603663 podman[257834]: 2026-01-31 08:48:21.085227549 +0000 UTC m=+0.668882339 container remove 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:48:21 np0005603663 systemd[1]: libpod-conmon-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope: Deactivated successfully.
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.20173961 +0000 UTC m=+0.019123361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:21 np0005603663 nova_compute[238824]: 2026-01-31 08:48:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.391355064 +0000 UTC m=+0.208738795 container create 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:21 np0005603663 systemd[1]: Started libpod-conmon-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope.
Jan 31 03:48:21 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:21 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.580494574 +0000 UTC m=+0.397878395 container init 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.586776485 +0000 UTC m=+0.404160256 container start 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.612205166 +0000 UTC m=+0.429588897 container attach 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:48:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:21 np0005603663 compassionate_jackson[257890]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:48:21 np0005603663 compassionate_jackson[257890]: --> All data devices are unavailable
Jan 31 03:48:21 np0005603663 systemd[1]: libpod-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope: Deactivated successfully.
Jan 31 03:48:21 np0005603663 podman[257874]: 2026-01-31 08:48:21.99374715 +0000 UTC m=+0.811130881 container died 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 03:48:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c-merged.mount: Deactivated successfully.
Jan 31 03:48:22 np0005603663 podman[257874]: 2026-01-31 08:48:22.055435544 +0000 UTC m=+0.872819265 container remove 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:48:22 np0005603663 systemd[1]: libpod-conmon-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope: Deactivated successfully.
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.489015855 +0000 UTC m=+0.048749224 container create 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:48:22 np0005603663 systemd[1]: Started libpod-conmon-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope.
Jan 31 03:48:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.458179088 +0000 UTC m=+0.017912447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.56049032 +0000 UTC m=+0.120223669 container init 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.566195274 +0000 UTC m=+0.125928613 container start 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 03:48:22 np0005603663 eloquent_almeida[258002]: 167 167
Jan 31 03:48:22 np0005603663 systemd[1]: libpod-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope: Deactivated successfully.
Jan 31 03:48:22 np0005603663 conmon[258002]: conmon 15c9173e0179c2dad5a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope/container/memory.events
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.573318019 +0000 UTC m=+0.133051348 container attach 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.574309538 +0000 UTC m=+0.134042867 container died 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 03:48:22 np0005603663 systemd[1]: var-lib-containers-storage-overlay-9b6000616a898a1a3a1da4cc8440deee821b40f473d4e7f5f8820d045adfafd9-merged.mount: Deactivated successfully.
Jan 31 03:48:22 np0005603663 podman[257986]: 2026-01-31 08:48:22.656114311 +0000 UTC m=+0.215847630 container remove 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:48:22 np0005603663 systemd[1]: libpod-conmon-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope: Deactivated successfully.
Jan 31 03:48:22 np0005603663 podman[258026]: 2026-01-31 08:48:22.788393815 +0000 UTC m=+0.042077371 container create 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:48:22 np0005603663 systemd[1]: Started libpod-conmon-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope.
Jan 31 03:48:22 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:22 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:22 np0005603663 podman[258026]: 2026-01-31 08:48:22.766135425 +0000 UTC m=+0.019819011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:22 np0005603663 podman[258026]: 2026-01-31 08:48:22.865830933 +0000 UTC m=+0.119514509 container init 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:48:22 np0005603663 podman[258026]: 2026-01-31 08:48:22.871412683 +0000 UTC m=+0.125096239 container start 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:48:22 np0005603663 podman[258026]: 2026-01-31 08:48:22.882501742 +0000 UTC m=+0.136185298 container attach 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]: {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    "0": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "devices": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "/dev/loop3"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            ],
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_name": "ceph_lv0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_size": "21470642176",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "name": "ceph_lv0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "tags": {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_name": "ceph",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.crush_device_class": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.encrypted": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.objectstore": "bluestore",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_id": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.vdo": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.with_tpm": "0"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            },
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "vg_name": "ceph_vg0"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        }
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    ],
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    "1": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "devices": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "/dev/loop4"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            ],
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_name": "ceph_lv1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_size": "21470642176",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "name": "ceph_lv1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "tags": {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_name": "ceph",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.crush_device_class": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.encrypted": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.objectstore": "bluestore",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_id": "1",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.vdo": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.with_tpm": "0"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            },
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "vg_name": "ceph_vg1"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        }
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    ],
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    "2": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "devices": [
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "/dev/loop5"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            ],
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_name": "ceph_lv2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_size": "21470642176",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "name": "ceph_lv2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "tags": {
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.cluster_name": "ceph",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.crush_device_class": "",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.encrypted": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.objectstore": "bluestore",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osd_id": "2",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.vdo": "0",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:                "ceph.with_tpm": "0"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            },
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "type": "block",
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:            "vg_name": "ceph_vg2"
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:        }
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]:    ]
Jan 31 03:48:23 np0005603663 quirky_lumiere[258042]: }
Jan 31 03:48:23 np0005603663 systemd[1]: libpod-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope: Deactivated successfully.
Jan 31 03:48:23 np0005603663 podman[258026]: 2026-01-31 08:48:23.143093127 +0000 UTC m=+0.396776683 container died 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:48:23 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622-merged.mount: Deactivated successfully.
Jan 31 03:48:23 np0005603663 podman[258026]: 2026-01-31 08:48:23.447442761 +0000 UTC m=+0.701126307 container remove 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:23 np0005603663 systemd[1]: libpod-conmon-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope: Deactivated successfully.
Jan 31 03:48:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:23 np0005603663 podman[258127]: 2026-01-31 08:48:23.816286928 +0000 UTC m=+0.018975086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:23 np0005603663 podman[258127]: 2026-01-31 08:48:23.919459466 +0000 UTC m=+0.122147624 container create dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:48:23 np0005603663 systemd[1]: Started libpod-conmon-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope.
Jan 31 03:48:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:24 np0005603663 podman[258127]: 2026-01-31 08:48:24.089516877 +0000 UTC m=+0.292205135 container init dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:48:24 np0005603663 podman[258127]: 2026-01-31 08:48:24.097833406 +0000 UTC m=+0.300521564 container start dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:48:24 np0005603663 happy_vaughan[258143]: 167 167
Jan 31 03:48:24 np0005603663 systemd[1]: libpod-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope: Deactivated successfully.
Jan 31 03:48:24 np0005603663 podman[258127]: 2026-01-31 08:48:24.153616431 +0000 UTC m=+0.356304629 container attach dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:48:24 np0005603663 podman[258127]: 2026-01-31 08:48:24.154752843 +0000 UTC m=+0.357441031 container died dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:24 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d7282ff69586829ef95f6f6f3a8a9d630b1c1bc855d98a9157fad66082359075-merged.mount: Deactivated successfully.
Jan 31 03:48:24 np0005603663 podman[258127]: 2026-01-31 08:48:24.341711961 +0000 UTC m=+0.544400159 container remove dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:48:24 np0005603663 systemd[1]: libpod-conmon-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope: Deactivated successfully.
Jan 31 03:48:24 np0005603663 podman[258166]: 2026-01-31 08:48:24.4922356 +0000 UTC m=+0.038571690 container create 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:48:24 np0005603663 systemd[1]: Started libpod-conmon-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope.
Jan 31 03:48:24 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:48:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:24 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:24 np0005603663 podman[258166]: 2026-01-31 08:48:24.562866111 +0000 UTC m=+0.109202231 container init 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:48:24 np0005603663 podman[258166]: 2026-01-31 08:48:24.569260555 +0000 UTC m=+0.115596635 container start 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:48:24 np0005603663 podman[258166]: 2026-01-31 08:48:24.475524049 +0000 UTC m=+0.021860159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:48:24 np0005603663 podman[258166]: 2026-01-31 08:48:24.574845966 +0000 UTC m=+0.121182056 container attach 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:48:25 np0005603663 lvm[258259]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:48:25 np0005603663 lvm[258259]: VG ceph_vg0 finished
Jan 31 03:48:25 np0005603663 lvm[258262]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:48:25 np0005603663 lvm[258262]: VG ceph_vg1 finished
Jan 31 03:48:25 np0005603663 lvm[258264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:48:25 np0005603663 lvm[258264]: VG ceph_vg2 finished
Jan 31 03:48:25 np0005603663 wizardly_yonath[258182]: {}
Jan 31 03:48:25 np0005603663 systemd[1]: libpod-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Deactivated successfully.
Jan 31 03:48:25 np0005603663 systemd[1]: libpod-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Consumed 1.025s CPU time.
Jan 31 03:48:25 np0005603663 podman[258166]: 2026-01-31 08:48:25.331615152 +0000 UTC m=+0.877951262 container died 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:48:25 np0005603663 systemd[1]: var-lib-containers-storage-overlay-428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a-merged.mount: Deactivated successfully.
Jan 31 03:48:25 np0005603663 podman[258166]: 2026-01-31 08:48:25.587615115 +0000 UTC m=+1.133951205 container remove 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:48:25 np0005603663 systemd[1]: libpod-conmon-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Deactivated successfully.
Jan 31 03:48:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:48:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:25 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:48:25 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:26 np0005603663 nova_compute[238824]: 2026-01-31 08:48:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:26 np0005603663 nova_compute[238824]: 2026-01-31 08:48:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:26 np0005603663 nova_compute[238824]: 2026-01-31 08:48:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:48:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:26 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:48:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:29 np0005603663 nova_compute[238824]: 2026-01-31 08:48:29.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:29 np0005603663 nova_compute[238824]: 2026-01-31 08:48:29.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:30 np0005603663 nova_compute[238824]: 2026-01-31 08:48:30.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:48:31
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Jan 31 03:48:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:48:32 np0005603663 nova_compute[238824]: 2026-01-31 08:48:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:32 np0005603663 nova_compute[238824]: 2026-01-31 08:48:32.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:48:32 np0005603663 nova_compute[238824]: 2026-01-31 08:48:32.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:48:32 np0005603663 nova_compute[238824]: 2026-01-31 08:48:32.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:48:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933584053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:48:33 np0005603663 nova_compute[238824]: 2026-01-31 08:48:33.937 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.054 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.055 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.056 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.056 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.154 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.154 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.175 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:48:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823023545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.679 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:34 np0005603663 nova_compute[238824]: 2026-01-31 08:48:34.684 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:35 np0005603663 nova_compute[238824]: 2026-01-31 08:48:35.307 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:35 np0005603663 nova_compute[238824]: 2026-01-31 08:48:35.309 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:48:35 np0005603663 nova_compute[238824]: 2026-01-31 08:48:35.309 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:36 np0005603663 podman[258348]: 2026-01-31 08:48:36.169798109 +0000 UTC m=+0.060080789 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:48:36 np0005603663 podman[258347]: 2026-01-31 08:48:36.205091214 +0000 UTC m=+0.100199293 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:48:36 np0005603663 nova_compute[238824]: 2026-01-31 08:48:36.303 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:48:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 03:48:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:48:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 31 03:48:55 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 03:48:55 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 31 03:48:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 03:48:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 03:48:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.019517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341019552, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 903, "num_deletes": 250, "total_data_size": 1290751, "memory_usage": 1311120, "flush_reason": "Manual Compaction"}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341030160, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 802201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27779, "largest_seqno": 28681, "table_properties": {"data_size": 798489, "index_size": 1428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9719, "raw_average_key_size": 20, "raw_value_size": 790553, "raw_average_value_size": 1682, "num_data_blocks": 65, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849259, "oldest_key_time": 1769849259, "file_creation_time": 1769849341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10711 microseconds, and 2230 cpu microseconds.
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.030222) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 802201 bytes OK
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.030244) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033142) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033154) EVENT_LOG_v1 {"time_micros": 1769849341033150, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1286376, prev total WAL file size 1286376, number of live WAL files 2.
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(783KB)], [62(9232KB)]
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341033593, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10256384, "oldest_snapshot_seqno": -1}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5139 keys, 7381180 bytes, temperature: kUnknown
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341067482, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7381180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7348450, "index_size": 18796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 127802, "raw_average_key_size": 24, "raw_value_size": 7257057, "raw_average_value_size": 1412, "num_data_blocks": 779, "num_entries": 5139, "num_filter_entries": 5139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.067687) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7381180 bytes
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.068920) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 302.1 rd, 217.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(22.0) write-amplify(9.2) OK, records in: 5620, records dropped: 481 output_compression: NoCompression
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.068937) EVENT_LOG_v1 {"time_micros": 1769849341068927, "job": 34, "event": "compaction_finished", "compaction_time_micros": 33955, "compaction_time_cpu_micros": 13358, "output_level": 6, "num_output_files": 1, "total_output_size": 7381180, "num_input_records": 5620, "num_output_records": 5139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341069139, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341070330, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 03:49:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 771 KiB/s wr, 7 op/s
Jan 31 03:49:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:07 np0005603663 podman[258391]: 2026-01-31 08:49:07.167241166 +0000 UTC m=+0.060496791 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 03:49:07 np0005603663 podman[258392]: 2026-01-31 08:49:07.175310018 +0000 UTC m=+0.066128933 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:49:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 03:49:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 03:49:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Jan 31 03:49:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.910 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.911 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.911 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:49:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:49:17 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:49:17 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:49:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:20 np0005603663 nova_compute[238824]: 2026-01-31 08:49:20.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:20 np0005603663 nova_compute[238824]: 2026-01-31 08:49:20.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:49:20 np0005603663 nova_compute[238824]: 2026-01-31 08:49:20.355 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:49:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:21 np0005603663 nova_compute[238824]: 2026-01-31 08:49:21.355 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:26 np0005603663 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:26 np0005603663 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:26 np0005603663 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:49:26 np0005603663 nova_compute[238824]: 2026-01-31 08:49:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:26 np0005603663 nova_compute[238824]: 2026-01-31 08:49:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 03:49:26 np0005603663 podman[258531]: 2026-01-31 08:49:26.374544526 +0000 UTC m=+0.052921633 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:49:26 np0005603663 podman[258531]: 2026-01-31 08:49:26.502575559 +0000 UTC m=+0.180952646 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:49:27 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:49:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.916920368 +0000 UTC m=+0.036909472 container create bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:49:27 np0005603663 systemd[1]: Started libpod-conmon-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope.
Jan 31 03:49:27 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.980996071 +0000 UTC m=+0.100985195 container init bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.985319365 +0000 UTC m=+0.105308469 container start bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.988103885 +0000 UTC m=+0.108093019 container attach bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:49:27 np0005603663 musing_wilson[258877]: 167 167
Jan 31 03:49:27 np0005603663 systemd[1]: libpod-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope: Deactivated successfully.
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.990231836 +0000 UTC m=+0.110220950 container died bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 03:49:27 np0005603663 podman[258861]: 2026-01-31 08:49:27.901163875 +0000 UTC m=+0.021152999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-eec8ad7002a94afab82a50fe1e2e2df82c5116a2452d24edf4533bf723001403-merged.mount: Deactivated successfully.
Jan 31 03:49:28 np0005603663 podman[258861]: 2026-01-31 08:49:28.031526074 +0000 UTC m=+0.151515188 container remove bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:28 np0005603663 systemd[1]: libpod-conmon-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope: Deactivated successfully.
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:28 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.167230627 +0000 UTC m=+0.041760862 container create 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:28 np0005603663 systemd[1]: Started libpod-conmon-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope.
Jan 31 03:49:28 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:28 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.148745995 +0000 UTC m=+0.023276260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.255888867 +0000 UTC m=+0.130419122 container init 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.262305581 +0000 UTC m=+0.136835816 container start 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.265489263 +0000 UTC m=+0.140019518 container attach 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:49:28 np0005603663 epic_margulis[258920]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:49:28 np0005603663 epic_margulis[258920]: --> All data devices are unavailable
Jan 31 03:49:28 np0005603663 systemd[1]: libpod-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope: Deactivated successfully.
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.714523168 +0000 UTC m=+0.589053443 container died 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:28 np0005603663 systemd[1]: var-lib-containers-storage-overlay-a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57-merged.mount: Deactivated successfully.
Jan 31 03:49:28 np0005603663 podman[258903]: 2026-01-31 08:49:28.767601444 +0000 UTC m=+0.642131709 container remove 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:28 np0005603663 systemd[1]: libpod-conmon-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope: Deactivated successfully.
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.241300019 +0000 UTC m=+0.046123918 container create 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:29 np0005603663 systemd[1]: Started libpod-conmon-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope.
Jan 31 03:49:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.31087091 +0000 UTC m=+0.115694819 container init 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.220206082 +0000 UTC m=+0.025029961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.314989659 +0000 UTC m=+0.119813518 container start 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:49:29 np0005603663 elastic_matsumoto[259028]: 167 167
Jan 31 03:49:29 np0005603663 systemd[1]: libpod-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope: Deactivated successfully.
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.318038956 +0000 UTC m=+0.122862845 container attach 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:49:29 np0005603663 conmon[259028]: conmon 1949efd019c03dce5c38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope/container/memory.events
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.31920344 +0000 UTC m=+0.124027289 container died 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 03:49:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-45051a31c824280b692fad5e532d1ef40ca4b51477afc84ad2af789d38465bcd-merged.mount: Deactivated successfully.
Jan 31 03:49:29 np0005603663 podman[259012]: 2026-01-31 08:49:29.352163968 +0000 UTC m=+0.156987827 container remove 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:29 np0005603663 systemd[1]: libpod-conmon-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope: Deactivated successfully.
Jan 31 03:49:29 np0005603663 nova_compute[238824]: 2026-01-31 08:49:29.358 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:29 np0005603663 podman[259052]: 2026-01-31 08:49:29.483941988 +0000 UTC m=+0.037232092 container create 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:29 np0005603663 systemd[1]: Started libpod-conmon-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope.
Jan 31 03:49:29 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:29 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:29 np0005603663 podman[259052]: 2026-01-31 08:49:29.547068854 +0000 UTC m=+0.100358978 container init 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:49:29 np0005603663 podman[259052]: 2026-01-31 08:49:29.553132838 +0000 UTC m=+0.106422942 container start 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:49:29 np0005603663 podman[259052]: 2026-01-31 08:49:29.556505655 +0000 UTC m=+0.109795769 container attach 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:49:29 np0005603663 podman[259052]: 2026-01-31 08:49:29.468630818 +0000 UTC m=+0.021920962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]: {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    "0": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "devices": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "/dev/loop3"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            ],
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_name": "ceph_lv0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_size": "21470642176",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "name": "ceph_lv0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "tags": {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_name": "ceph",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.crush_device_class": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.encrypted": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.objectstore": "bluestore",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_id": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.vdo": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.with_tpm": "0"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            },
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "vg_name": "ceph_vg0"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        }
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    ],
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    "1": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "devices": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "/dev/loop4"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            ],
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_name": "ceph_lv1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_size": "21470642176",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "name": "ceph_lv1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "tags": {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_name": "ceph",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.crush_device_class": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.encrypted": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.objectstore": "bluestore",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_id": "1",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.vdo": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.with_tpm": "0"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            },
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "vg_name": "ceph_vg1"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        }
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    ],
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    "2": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "devices": [
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "/dev/loop5"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            ],
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_name": "ceph_lv2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_size": "21470642176",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "name": "ceph_lv2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "tags": {
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.cluster_name": "ceph",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.crush_device_class": "",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.encrypted": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.objectstore": "bluestore",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osd_id": "2",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.vdo": "0",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:                "ceph.with_tpm": "0"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            },
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "type": "block",
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:            "vg_name": "ceph_vg2"
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:        }
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]:    ]
Jan 31 03:49:29 np0005603663 friendly_visvesvaraya[259069]: }
Jan 31 03:49:29 np0005603663 systemd[1]: libpod-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope: Deactivated successfully.
Jan 31 03:49:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:29 np0005603663 podman[259078]: 2026-01-31 08:49:29.876874519 +0000 UTC m=+0.030330933 container died 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:49:29 np0005603663 systemd[1]: var-lib-containers-storage-overlay-7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0-merged.mount: Deactivated successfully.
Jan 31 03:49:29 np0005603663 podman[259078]: 2026-01-31 08:49:29.913571175 +0000 UTC m=+0.067027579 container remove 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 03:49:29 np0005603663 systemd[1]: libpod-conmon-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope: Deactivated successfully.
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.323214907 +0000 UTC m=+0.039736494 container create 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:30 np0005603663 systemd[1]: Started libpod-conmon-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope.
Jan 31 03:49:30 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.304738475 +0000 UTC m=+0.021260102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.406166883 +0000 UTC m=+0.122688470 container init 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.415844531 +0000 UTC m=+0.132366098 container start 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.419721943 +0000 UTC m=+0.136243540 container attach 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:49:30 np0005603663 frosty_allen[259171]: 167 167
Jan 31 03:49:30 np0005603663 systemd[1]: libpod-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope: Deactivated successfully.
Jan 31 03:49:30 np0005603663 conmon[259171]: conmon 3e5ab8946644e4776bed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope/container/memory.events
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.421694349 +0000 UTC m=+0.138215916 container died 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:49:30 np0005603663 systemd[1]: var-lib-containers-storage-overlay-d3b00f664b8cfab53da4e4df9e219ce684cad3bba3d96b168bb62d4eb3376402-merged.mount: Deactivated successfully.
Jan 31 03:49:30 np0005603663 podman[259155]: 2026-01-31 08:49:30.454850523 +0000 UTC m=+0.171372100 container remove 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:49:30 np0005603663 systemd[1]: libpod-conmon-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope: Deactivated successfully.
Jan 31 03:49:30 np0005603663 podman[259196]: 2026-01-31 08:49:30.598839644 +0000 UTC m=+0.043454590 container create 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:49:30 np0005603663 systemd[1]: Started libpod-conmon-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope.
Jan 31 03:49:30 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:49:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:30 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:30 np0005603663 podman[259196]: 2026-01-31 08:49:30.663532535 +0000 UTC m=+0.108147481 container init 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:49:30 np0005603663 podman[259196]: 2026-01-31 08:49:30.67275438 +0000 UTC m=+0.117369316 container start 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:49:30 np0005603663 podman[259196]: 2026-01-31 08:49:30.578821479 +0000 UTC m=+0.023436455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:49:30 np0005603663 podman[259196]: 2026-01-31 08:49:30.676101517 +0000 UTC m=+0.120716483 container attach 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:49:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:31 np0005603663 lvm[259291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:49:31 np0005603663 lvm[259294]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:49:31 np0005603663 lvm[259291]: VG ceph_vg0 finished
Jan 31 03:49:31 np0005603663 lvm[259294]: VG ceph_vg1 finished
Jan 31 03:49:31 np0005603663 lvm[259296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:49:31 np0005603663 lvm[259296]: VG ceph_vg2 finished
Jan 31 03:49:31 np0005603663 nova_compute[238824]: 2026-01-31 08:49:31.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:31 np0005603663 exciting_wozniak[259213]: {}
Jan 31 03:49:31 np0005603663 systemd[1]: libpod-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope: Deactivated successfully.
Jan 31 03:49:31 np0005603663 podman[259196]: 2026-01-31 08:49:31.373147615 +0000 UTC m=+0.817762551 container died 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:49:31 np0005603663 systemd[1]: var-lib-containers-storage-overlay-b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436-merged.mount: Deactivated successfully.
Jan 31 03:49:31 np0005603663 podman[259196]: 2026-01-31 08:49:31.415182024 +0000 UTC m=+0.859796960 container remove 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:49:31 np0005603663 systemd[1]: libpod-conmon-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope: Deactivated successfully.
Jan 31 03:49:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:49:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:49:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:49:31
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes']
Jan 31 03:49:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:49:32 np0005603663 nova_compute[238824]: 2026-01-31 08:49:32.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:32 np0005603663 nova_compute[238824]: 2026-01-31 08:49:32.342 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:49:32 np0005603663 nova_compute[238824]: 2026-01-31 08:49:32.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:49:32 np0005603663 nova_compute[238824]: 2026-01-31 08:49:32.390 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:49:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:49:33 np0005603663 nova_compute[238824]: 2026-01-31 08:49:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:34 np0005603663 nova_compute[238824]: 2026-01-31 08:49:34.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.375 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:49:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133995121' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:49:35 np0005603663 nova_compute[238824]: 2026-01-31 08:49:35.898 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.011 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.164 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.165 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.187 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:49:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803382574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.733 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.737 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.751 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.753 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:49:36 np0005603663 nova_compute[238824]: 2026-01-31 08:49:36.753 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:37 np0005603663 nova_compute[238824]: 2026-01-31 08:49:37.734 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:38 np0005603663 podman[259382]: 2026-01-31 08:49:38.179114537 +0000 UTC m=+0.063843527 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:49:38 np0005603663 podman[259381]: 2026-01-31 08:49:38.209161011 +0000 UTC m=+0.096708582 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, config_id=ovn_controller)
Jan 31 03:49:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033324202763395413 of space, bias 1.0, pg target 0.09997260829018624 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.535042946262736e-06 of space, bias 4.0, pg target 0.003042051535515283 quantized to 16 (current 16)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:49:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:49:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 31 03:49:59 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 03:49:59 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 31 03:49:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 8.5 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 03:50:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 03:50:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 03:50:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 31 03:50:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 03:50:06 np0005603663 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 31 03:50:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 31 03:50:09 np0005603663 podman[259431]: 2026-01-31 08:50:09.153502718 +0000 UTC m=+0.038333583 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:50:09 np0005603663 podman[259430]: 2026-01-31 08:50:09.206112351 +0000 UTC m=+0.089627379 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:50:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 03:50:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.912 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.912 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:50:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:50:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:50:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:50:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:23 np0005603663 nova_compute[238824]: 2026-01-31 08:50:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:26 np0005603663 nova_compute[238824]: 2026-01-31 08:50:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:27 np0005603663 nova_compute[238824]: 2026-01-31 08:50:27.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:27 np0005603663 nova_compute[238824]: 2026-01-31 08:50:27.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:50:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:30 np0005603663 nova_compute[238824]: 2026-01-31 08:50:30.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:50:31
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta']
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:50:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:50:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:50:32 np0005603663 nova_compute[238824]: 2026-01-31 08:50:32.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.47984022 +0000 UTC m=+0.040724132 container create 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:32 np0005603663 systemd[1]: Started libpod-conmon-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope.
Jan 31 03:50:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.560217352 +0000 UTC m=+0.121101314 container init 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.463596723 +0000 UTC m=+0.024480655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.566901784 +0000 UTC m=+0.127785716 container start 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:50:32 np0005603663 trusting_wozniak[259634]: 167 167
Jan 31 03:50:32 np0005603663 systemd[1]: libpod-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope: Deactivated successfully.
Jan 31 03:50:32 np0005603663 conmon[259634]: conmon 6b04cccc3454259882a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope/container/memory.events
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.572414143 +0000 UTC m=+0.133298085 container attach 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.573298198 +0000 UTC m=+0.134182130 container died 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:50:32 np0005603663 systemd[1]: var-lib-containers-storage-overlay-0576fbec2df5fffcf1c643f7a252602f0d14f3de8f7e11e96cab66f4152601dd-merged.mount: Deactivated successfully.
Jan 31 03:50:32 np0005603663 podman[259618]: 2026-01-31 08:50:32.619306421 +0000 UTC m=+0.180190353 container remove 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:50:32 np0005603663 systemd[1]: libpod-conmon-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope: Deactivated successfully.
Jan 31 03:50:32 np0005603663 podman[259658]: 2026-01-31 08:50:32.766166055 +0000 UTC m=+0.049836914 container create 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:50:32 np0005603663 systemd[1]: Started libpod-conmon-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope.
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:32 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:32 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:32 np0005603663 podman[259658]: 2026-01-31 08:50:32.751764021 +0000 UTC m=+0.035434900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:32 np0005603663 podman[259658]: 2026-01-31 08:50:32.862625 +0000 UTC m=+0.146295879 container init 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:50:32 np0005603663 podman[259658]: 2026-01-31 08:50:32.872180965 +0000 UTC m=+0.155851824 container start 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 03:50:32 np0005603663 podman[259658]: 2026-01-31 08:50:32.877108186 +0000 UTC m=+0.160779045 container attach 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:50:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:33 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:50:33 np0005603663 wizardly_almeida[259675]: --> passed data devices: 0 physical, 3 LVM
Jan 31 03:50:33 np0005603663 wizardly_almeida[259675]: --> All data devices are unavailable
Jan 31 03:50:33 np0005603663 systemd[1]: libpod-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope: Deactivated successfully.
Jan 31 03:50:33 np0005603663 podman[259695]: 2026-01-31 08:50:33.35704108 +0000 UTC m=+0.025348040 container died 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:50:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59-merged.mount: Deactivated successfully.
Jan 31 03:50:33 np0005603663 podman[259695]: 2026-01-31 08:50:33.401923471 +0000 UTC m=+0.070230431 container remove 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:50:33 np0005603663 systemd[1]: libpod-conmon-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope: Deactivated successfully.
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.827932534 +0000 UTC m=+0.039176568 container create 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:33 np0005603663 systemd[1]: Started libpod-conmon-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope.
Jan 31 03:50:33 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.889107123 +0000 UTC m=+0.100351187 container init 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 03:50:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.894163769 +0000 UTC m=+0.105407823 container start 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.897603838 +0000 UTC m=+0.108847912 container attach 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:50:33 np0005603663 agitated_mahavira[259788]: 167 167
Jan 31 03:50:33 np0005603663 systemd[1]: libpod-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope: Deactivated successfully.
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.900601194 +0000 UTC m=+0.111845248 container died 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.812327375 +0000 UTC m=+0.023571459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:33 np0005603663 systemd[1]: var-lib-containers-storage-overlay-fc7f27acb6f001a18f3ecfb3f95c89e83af6d9df020a9348bd5ae43d0ffb0a1a-merged.mount: Deactivated successfully.
Jan 31 03:50:33 np0005603663 podman[259772]: 2026-01-31 08:50:33.93800329 +0000 UTC m=+0.149247334 container remove 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:50:33 np0005603663 systemd[1]: libpod-conmon-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.070892792 +0000 UTC m=+0.040819985 container create a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:50:34 np0005603663 systemd[1]: Started libpod-conmon-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope.
Jan 31 03:50:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:34 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.13271251 +0000 UTC m=+0.102639713 container init a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.138674081 +0000 UTC m=+0.108601264 container start a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.141514023 +0000 UTC m=+0.111441296 container attach a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.055500339 +0000 UTC m=+0.025427542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:34 np0005603663 nova_compute[238824]: 2026-01-31 08:50:34.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:50:34 np0005603663 nova_compute[238824]: 2026-01-31 08:50:34.357 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:50:34 np0005603663 nova_compute[238824]: 2026-01-31 08:50:34.357 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:50:34 np0005603663 nova_compute[238824]: 2026-01-31 08:50:34.358 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:50:34 np0005603663 bold_lalande[259830]: {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    "0": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "devices": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "/dev/loop3"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            ],
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_name": "ceph_lv0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_size": "21470642176",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "name": "ceph_lv0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "tags": {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_name": "ceph",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.crush_device_class": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.encrypted": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.objectstore": "bluestore",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_id": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.vdo": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.with_tpm": "0"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            },
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "vg_name": "ceph_vg0"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        }
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    ],
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    "1": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "devices": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "/dev/loop4"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            ],
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_name": "ceph_lv1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_size": "21470642176",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "name": "ceph_lv1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "tags": {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_name": "ceph",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.crush_device_class": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.encrypted": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.objectstore": "bluestore",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_id": "1",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.vdo": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.with_tpm": "0"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            },
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "vg_name": "ceph_vg1"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        }
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    ],
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    "2": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "devices": [
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "/dev/loop5"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            ],
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_name": "ceph_lv2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_size": "21470642176",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "name": "ceph_lv2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "tags": {
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.cluster_name": "ceph",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.crush_device_class": "",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.encrypted": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.objectstore": "bluestore",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osd_id": "2",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.vdo": "0",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:                "ceph.with_tpm": "0"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            },
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "type": "block",
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:            "vg_name": "ceph_vg2"
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:        }
Jan 31 03:50:34 np0005603663 bold_lalande[259830]:    ]
Jan 31 03:50:34 np0005603663 bold_lalande[259830]: }
Jan 31 03:50:34 np0005603663 nova_compute[238824]: 2026-01-31 08:50:34.373 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 03:50:34 np0005603663 systemd[1]: libpod-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.400680607 +0000 UTC m=+0.370607810 container died a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:50:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e-merged.mount: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259813]: 2026-01-31 08:50:34.445704982 +0000 UTC m=+0.415632205 container remove a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:50:34 np0005603663 systemd[1]: libpod-conmon-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.825067893 +0000 UTC m=+0.032773113 container create 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:50:34 np0005603663 systemd[1]: Started libpod-conmon-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope.
Jan 31 03:50:34 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.886421218 +0000 UTC m=+0.094126458 container init 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.890946068 +0000 UTC m=+0.098651298 container start 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.894417458 +0000 UTC m=+0.102122698 container attach 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:50:34 np0005603663 determined_mahavira[259930]: 167 167
Jan 31 03:50:34 np0005603663 systemd[1]: libpod-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.897572499 +0000 UTC m=+0.105277729 container died 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.810306399 +0000 UTC m=+0.018011649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:34 np0005603663 systemd[1]: var-lib-containers-storage-overlay-c73bba857782130bba137ff2eaa4f90494193e718519a0ced55eee113678035c-merged.mount: Deactivated successfully.
Jan 31 03:50:34 np0005603663 podman[259913]: 2026-01-31 08:50:34.93445767 +0000 UTC m=+0.142162900 container remove 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 03:50:34 np0005603663 systemd[1]: libpod-conmon-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope: Deactivated successfully.
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.070520353 +0000 UTC m=+0.052871792 container create ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 03:50:35 np0005603663 systemd[1]: Started libpod-conmon-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope.
Jan 31 03:50:35 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:50:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:35 np0005603663 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.047764069 +0000 UTC m=+0.030115558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.16915802 +0000 UTC m=+0.151509469 container init ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.175097781 +0000 UTC m=+0.157449190 container start ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.237064783 +0000 UTC m=+0.219416222 container attach ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 03:50:35 np0005603663 nova_compute[238824]: 2026-01-31 08:50:35.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:35 np0005603663 lvm[260049]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:50:35 np0005603663 lvm[260049]: VG ceph_vg0 finished
Jan 31 03:50:35 np0005603663 lvm[260050]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:50:35 np0005603663 lvm[260050]: VG ceph_vg1 finished
Jan 31 03:50:35 np0005603663 lvm[260052]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:50:35 np0005603663 lvm[260052]: VG ceph_vg2 finished
Jan 31 03:50:35 np0005603663 peaceful_kalam[259971]: {}
Jan 31 03:50:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:35 np0005603663 systemd[1]: libpod-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Deactivated successfully.
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.930934089 +0000 UTC m=+0.913285518 container died ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 03:50:35 np0005603663 systemd[1]: libpod-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Consumed 1.072s CPU time.
Jan 31 03:50:35 np0005603663 systemd[1]: var-lib-containers-storage-overlay-e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0-merged.mount: Deactivated successfully.
Jan 31 03:50:35 np0005603663 podman[259954]: 2026-01-31 08:50:35.983484041 +0000 UTC m=+0.965835450 container remove ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:35 np0005603663 systemd[1]: libpod-conmon-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Deactivated successfully.
Jan 31 03:50:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 03:50:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 03:50:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.336 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:37 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:50:37 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778539515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:50:37 np0005603663 nova_compute[238824]: 2026-01-31 08:50:37.880 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:37 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.015 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.298 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.362 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.362 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.376 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.396 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.428 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:38 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 03:50:38 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752485080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.967 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:38 np0005603663 nova_compute[238824]: 2026-01-31 08:50:38.973 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:39 np0005603663 nova_compute[238824]: 2026-01-31 08:50:39.005 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:39 np0005603663 nova_compute[238824]: 2026-01-31 08:50:39.006 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:50:39 np0005603663 nova_compute[238824]: 2026-01-31 08:50:39.007 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:39 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:40 np0005603663 podman[260136]: 2026-01-31 08:50:40.157007808 +0000 UTC m=+0.047790025 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:50:40 np0005603663 podman[260135]: 2026-01-31 08:50:40.181950546 +0000 UTC m=+0.072630370 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:50:41 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:41 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.546112324845754e-07 of space, bias 1.0, pg target 7.638336974537263e-05 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.7421629738588775e-06 of space, bias 4.0, pg target 0.003290595568630653 quantized to 16 (current 16)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 03:50:43 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:45 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:46 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:47 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:49 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:51 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:51 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:53 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:55 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:56 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:57 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:50:59 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:01 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:01 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:02 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:03 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:05 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:06 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:07 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:09 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:11 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:11 np0005603663 podman[260181]: 2026-01-31 08:51:11.16591205 +0000 UTC m=+0.058327162 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:51:11 np0005603663 podman[260182]: 2026-01-31 08:51:11.18099644 +0000 UTC m=+0.068049199 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:51:11 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:13 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:15 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:16 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:17 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:17 np0005603663 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 03:51:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 03:51:18 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 03:51:18 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 03:51:18 np0005603663 systemd-logind[793]: New session 56 of user zuul.
Jan 31 03:51:18 np0005603663 systemd[1]: Started Session 56 of User zuul.
Jan 31 03:51:19 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:20 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:21 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:21 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 03:51:21 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822961792' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 03:51:21 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.790824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482790852, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1666, "num_deletes": 509, "total_data_size": 2228630, "memory_usage": 2263264, "flush_reason": "Manual Compaction"}
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482916768, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2184468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28682, "largest_seqno": 30347, "table_properties": {"data_size": 2177122, "index_size": 3840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 18135, "raw_average_key_size": 18, "raw_value_size": 2160325, "raw_average_value_size": 2262, "num_data_blocks": 173, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849342, "oldest_key_time": 1769849342, "file_creation_time": 1769849482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 126019 microseconds, and 3336 cpu microseconds.
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.916834) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2184468 bytes OK
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.916856) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993468) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993526) EVENT_LOG_v1 {"time_micros": 1769849482993514, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2220323, prev total WAL file size 2220323, number of live WAL files 2.
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.994373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2133KB)], [65(7208KB)]
Jan 31 03:51:22 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482994469, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9565648, "oldest_snapshot_seqno": -1}
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5059 keys, 7739020 bytes, temperature: kUnknown
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483165540, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7739020, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7705693, "index_size": 19585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 127996, "raw_average_key_size": 25, "raw_value_size": 7614494, "raw_average_value_size": 1505, "num_data_blocks": 804, "num_entries": 5059, "num_filter_entries": 5059, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.165823) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7739020 bytes
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.195829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.9 rd, 45.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 6094, records dropped: 1035 output_compression: NoCompression
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.195868) EVENT_LOG_v1 {"time_micros": 1769849483195853, "job": 36, "event": "compaction_finished", "compaction_time_micros": 171151, "compaction_time_cpu_micros": 16365, "output_level": 6, "num_output_files": 1, "total_output_size": 7739020, "num_input_records": 6094, "num_output_records": 5059, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483196282, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483197099, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.994215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:23 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:25 np0005603663 nova_compute[238824]: 2026-01-31 08:51:25.009 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:25 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:26 np0005603663 ovs-vsctl[260555]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 03:51:26 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:26 np0005603663 virtqemud[239124]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 03:51:26 np0005603663 virtqemud[239124]: hostname: compute-0
Jan 31 03:51:26 np0005603663 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 03:51:26 np0005603663 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 03:51:26 np0005603663 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 03:51:27 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: cache status {prefix=cache status} (starting...)
Jan 31 03:51:27 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: client ls {prefix=client ls} (starting...)
Jan 31 03:51:27 np0005603663 lvm[260890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 03:51:27 np0005603663 lvm[260890]: VG ceph_vg0 finished
Jan 31 03:51:27 np0005603663 lvm[260912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 03:51:27 np0005603663 lvm[260912]: VG ceph_vg1 finished
Jan 31 03:51:27 np0005603663 lvm[260915]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 03:51:27 np0005603663 lvm[260915]: VG ceph_vg2 finished
Jan 31 03:51:27 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:27 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 03:51:28 np0005603663 nova_compute[238824]: 2026-01-31 08:51:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 03:51:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968346629' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:28 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 03:51:28 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:51:28 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561520865' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:51:29 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 03:51:29 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:29 np0005603663 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 03:51:29 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:29.125+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 03:51:29 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: ops {prefix=ops} (starting...)
Jan 31 03:51:29 np0005603663 nova_compute[238824]: 2026-01-31 08:51:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:29 np0005603663 nova_compute[238824]: 2026-01-31 08:51:29.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154022046' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145512582' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 03:51:29 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: session ls {prefix=session ls} (starting...)
Jan 31 03:51:29 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 03:51:29 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254321740' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 03:51:30 np0005603663 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: status {prefix=status} (starting...)
Jan 31 03:51:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 03:51:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966179961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 03:51:30 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:30 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 03:51:30 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303141896' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 03:51:30 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14642 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033206096' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 03:51:31 np0005603663 nova_compute[238824]: 2026-01-31 08:51:31.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032501781' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742990929' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:51:31
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'images']
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 03:51:31 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 03:51:31 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722635469' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76167573' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14654 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 03:51:32 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:32.429+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383974240' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:32 np0005603663 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 03:51:32 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547894075' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 324.490722656s of 324.498870850s, submitted: 3
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 303104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 278528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc ms_handle_reset ms_handle_reset con 0x5603a3a9a000
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc handle_mgr_configure stats_period=5
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.305572510s of 300.562957764s, submitted: 90
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.8 total, 600.0 interval
Cumulative writes: 5591 writes, 24K keys, 5591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5591 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 227 writes, 342 keys, 227 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 227 writes, 113 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.835449219s of 299.881591797s, submitted: 24
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [0,0,1])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [1])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 139264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.759403229s of 59.501785278s, submitted: 90
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 892928 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 17547264 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001648 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 130 ms_handle_reset con 0x5603a3420c00 session 0x5603a53c1880
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc69e000/0x0/0x4ffc00000, data 0x8c48f0/0x98c000, compress 0x0/0x0/0x0, omap 0x11604, meta 0x2bbe9fc), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 17514496 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17276928 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 17195008 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 17195008 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 131 ms_handle_reset con 0x5603a6264c00 session 0x5603a606da40
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030439 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd380a6/0xe04000, compress 0x0/0x0/0x0, omap 0x11bc8, meta 0x2bbe438), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.325613022s of 13.250974655s, submitted: 45
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033053 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fc223000/0x0/0x4ffc00000, data 0xd39b25/0xe07000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17219584 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd39b02/0xe06000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd39b02/0xe06000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 17203200 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033880 data_alloc: 218103808 data_used: 8479
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 ms_handle_reset con 0x5603a56cc800 session 0x5603a40f0a80
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16924672 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca25000/0x0/0x4ffc00000, data 0x53b6bf/0x607000, compress 0x0/0x0/0x0, omap 0x1219d, meta 0x2bbde63), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16924672 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993344 data_alloc: 218103808 data_used: 12540
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.058580399s of 10.925365448s, submitted: 46
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 16916480 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 16908288 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 135 ms_handle_reset con 0x5603a2ea1800 session 0x5603a606d340
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced4a/0x19d000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976492 data_alloc: 218103808 data_used: 12540
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000028s
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 nova_compute[238824]: 2026-01-31 08:51:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 429.654937744s of 431.133453369s, submitted: 53
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 16572416 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982061 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 24797184 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 137 ms_handle_reset con 0x5603a56cb800 session 0x5603a603f6c0
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24485888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24485888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 ms_handle_reset con 0x5603a62be400 session 0x5603a603fc00
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba15000/0x0/0x4ffc00000, data 0x1542398/0x1615000, compress 0x0/0x0/0x0, omap 0x12e9f, meta 0x2bbd161), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24461312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097417 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097417 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.541514397s of 15.457092285s, submitted: 25
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 24264704 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 139 ms_handle_reset con 0x5603a3a4b000 session 0x5603a604c540
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034016 data_alloc: 218103808 data_used: 16601
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 140 ms_handle_reset con 0x5603a41e8c00 session 0x5603a6100700
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997178 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.8 total, 600.0 interval
                                              Cumulative writes: 6134 writes, 25K keys, 6134 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 6134 writes, 1062 syncs, 5.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 543 writes, 1575 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                              Interval WAL: 543 writes, 236 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997178 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.392799377s of 12.520668983s, submitted: 48
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 23166976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 23166976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc ms_handle_reset ms_handle_reset con 0x5603a3817000
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: mgrc handle_mgr_configure stats_period=5
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.516902924s of 57.524673462s, submitted: 14
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 21733376 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 21487616 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.8 total, 600.0 interval
Cumulative writes: 6379 writes, 26K keys, 6379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6379 writes, 1179 syncs, 5.41 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 245 writes, 416 keys, 245 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
Interval WAL: 245 writes, 117 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.984802246s of 600.752502441s, submitted: 114
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 21291008 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [0,0,0,0,1])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 21118976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.280435562s of 22.907487869s, submitted: 90
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 19955712 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000970 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 19931136 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 ms_handle_reset con 0x5603a4649000 session 0x5603a606c700
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca0d000/0x0/0x4ffc00000, data 0x54914d/0x61f000, compress 0x0/0x0/0x0, omap 0x13c58, meta 0x2bbc3a8), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 65.113044739s of 65.703971863s, submitted: 12
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 143 ms_handle_reset con 0x5603a3a4d800 session 0x5603a53c0e00
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 19898368 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fca0a000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13d7c, meta 0x2bbc284), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdc8c9/0x1b4000, compress 0x0/0x0/0x0, omap 0x143bc, meta 0x2bbbc44), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008610 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdc8c9/0x1b4000, compress 0x0/0x0/0x0, omap 0x143bc, meta 0x2bbbc44), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 18718720 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'config diff' '{prefix=config diff}'
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'config show' '{prefix=config show}'
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 18300928 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18022400 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:33 np0005603663 ceph-osd[88096]: do_command 'log dump' '{prefix=log dump}'
Jan 31 03:51:33 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 03:51:33 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315195756' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:33 np0005603663 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:51:33 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:34 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616624009' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 03:51:34 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 03:51:34 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407287328' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 03:51:34 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 03:51:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/277069709' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 03:51:35 np0005603663 nova_compute[238824]: 2026-01-31 08:51:35.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:35 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:35 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 03:51:35 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529536462' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 03:51:35 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 03:51:35 np0005603663 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:36 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569238403' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 03:51:36 np0005603663 nova_compute[238824]: 2026-01-31 08:51:36.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:36 np0005603663 nova_compute[238824]: 2026-01-31 08:51:36.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:51:36 np0005603663 nova_compute[238824]: 2026-01-31 08:51:36.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:51:36 np0005603663 nova_compute[238824]: 2026-01-31 08:51:36.374 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 03:51:36 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14690 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 03:51:36 np0005603663 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215502721' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.095223955 +0000 UTC m=+0.024027076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 03:51:37 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.323764535 +0000 UTC m=+0.252567646 container create 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:51:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 03:51:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 03:51:37 np0005603663 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 03:51:37 np0005603663 systemd[1]: Started libpod-conmon-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope.
Jan 31 03:51:37 np0005603663 systemd[1]: Started libcrun container.
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.639937432 +0000 UTC m=+0.568740553 container init 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.64691274 +0000 UTC m=+0.575715841 container start 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 0 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1040384 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1032192 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1024000 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1015808 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1007616 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 999424 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 991232 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 983040 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 974848 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 974848 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 966656 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 966656 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 942080 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 296.841705322s of 296.850128174s, submitted: 4
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1105920 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 2301952 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 2301952 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 2277376 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 2277376 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 2260992 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 2260992 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 2211840 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 2211840 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 2195456 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 2195456 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 conmon[262514]: conmon 3be3ced4a2a00120c0ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope/container/memory.events
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 systemd[1]: libpod-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope: Deactivated successfully.
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 agitated_sammet[262514]: 167 167
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 ms_handle_reset con 0x55d782c66400 session 0x55d782980e00
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 ms_handle_reset con 0x55d782c67000 session 0x55d782f1a380
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.297973633s of 300.506713867s, submitted: 90
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.695641929 +0000 UTC m=+0.624445080 container attach 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:51:37 np0005603663 podman[262473]: 2026-01-31 08:51:37.696009889 +0000 UTC m=+0.624813010 container died 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.3 total, 600.0 interval
Cumulative writes: 7056 writes, 29K keys, 7056 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7056 writes, 1347 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read,
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 2072576 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.966552734s of 300.005035400s, submitted: 22
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.129043579s of 59.412487030s, submitted: 90
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x133408/0x1fa000, compress 0x0/0x0/0x0, omap 0x145f0, meta 0x2bbba10), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fce2f000/0x0/0x4ffc00000, data 0x133410/0x1fb000, compress 0x0/0x0/0x0, omap 0x145f0, meta 0x2bbba10), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 10600448 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063859 data_alloc: 218103808 data_used: 6999
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 18980864 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 19111936 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 130 ms_handle_reset con 0x55d784a97400 session 0x55d784e8ba40
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 18980864 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 10551296 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 18931712 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb1b8000/0x0/0x4ffc00000, data 0x1da6bd3/0x1e72000, compress 0x0/0x0/0x0, omap 0x150e3, meta 0x2bbaf1d), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176756 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 18931712 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 131 ms_handle_reset con 0x55d784a97800 session 0x55d784a401c0
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb1b4000/0x0/0x4ffc00000, data 0x1da87ae/0x1e76000, compress 0x0/0x0/0x0, omap 0x154ad, meta 0x2bbab53), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 03:51:37 np0005603663 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:37.723+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 03:51:37 np0005603663 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180902 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.004181862s of 13.245240211s, submitted: 48
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb1b1000/0x0/0x4ffc00000, data 0x1daa22d/0x1e79000, compress 0x0/0x0/0x0, omap 0x15720, meta 0x2bba8e0), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183036 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb1ae000/0x0/0x4ffc00000, data 0x1dabe1d/0x1e7c000, compress 0x0/0x0/0x0, omap 0x15bc3, meta 0x2bba43d), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 ms_handle_reset con 0x55d784a97c00 session 0x55d782c948c0
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 18669568 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121976 data_alloc: 218103808 data_used: 7034
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x113bdfa/0x120b000, compress 0x0/0x0/0x0, omap 0x15fd9, meta 0x2bba027), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 18653184 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x113bdfa/0x120b000, compress 0x0/0x0/0x0, omap 0x15fd9, meta 0x2bba027), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.201231003s of 10.794104576s, submitted: 57
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 18653184 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18636800 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18636800 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0x113d9da/0x120d000, compress 0x0/0x0/0x0, omap 0x162e4, meta 0x2bb9d1c), peers [0,2] op hist [0,0,1])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 135 ms_handle_reset con 0x55d783df2000 session 0x55d78517b880
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 18767872 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046147 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fce1a000/0x0/0x4ffc00000, data 0x13f475/0x210000, compress 0x0/0x0/0x0, omap 0x16729, meta 0x2bb98d7), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 18898944 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000027s
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 429.815399170s of 431.936645508s, submitted: 61
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 17825792 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 137 ms_handle_reset con 0x55d784a97800 session 0x55d78461ba40
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 16777216 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 25108480 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x942a90/0xa16000, compress 0x0/0x0/0x0, omap 0x17227, meta 0x2bb8dd9), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 138 ms_handle_reset con 0x55d781e85400 session 0x55d784246700
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc610000/0x0/0x4ffc00000, data 0x94464f/0xa1a000, compress 0x0/0x0/0x0, omap 0x17606, meta 0x2bb89fa), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc610000/0x0/0x4ffc00000, data 0x94464f/0xa1a000, compress 0x0/0x0/0x0, omap 0x17606, meta 0x2bb89fa), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.652050018s of 14.759039879s, submitted: 27
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 139 ms_handle_reset con 0x55d781e85000 session 0x55d783d97180
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 7626 writes, 30K keys, 7626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7626 writes, 1597 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 570 writes, 1601 keys, 570 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 570 writes, 250 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc60d000/0x0/0x4ffc00000, data 0x94623f/0xa1d000, compress 0x0/0x0/0x0, omap 0x17ca5, meta 0x2bb835b), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 ms_handle_reset con 0x55d784ba7400 session 0x55d784c10c40
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064249 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064249 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: mgrc ms_handle_reset ms_handle_reset con 0x55d782260000
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: mgrc handle_mgr_configure stats_period=5
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.615085602s of 11.834397316s, submitted: 74
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d7806c5000 session 0x55d782f1a540
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d782c66c00 session 0x55d784675dc0
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d782c66400 session 0x55d784a90a80
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d78283fc00 session 0x55d782c95c00
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.195083618s of 58.221981049s, submitted: 15
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 03:51:37 np0005603663 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
